Nov 24 19:16:45 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 19:16:45 crc restorecon[4752]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
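The entries above all live in the pod's ca-trust-extracted emptyDir: the named .pem files are the root certificates themselves, and the eight-hex-digit NNNNNNNN.0 entries are the OpenSSL subject-hash links that point at them. A minimal sketch of how the two name forms relate, using a path and certificate taken from the log above (any of the listed .pem files would serve equally well):

  # Print the subject hash OpenSSL derives for a certificate; that hash,
  # with a ".0" suffix, is the link name seen in directory-hash/.
  openssl x509 -noout -subject_hash \
      -in /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem

  # Rebuild every hash link in a directory of PEM certificates; the trust
  # extraction step effectively does this when it populates directory-hash/.
  openssl rehash /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash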
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
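The repeated "not reset as customized by admin" message is restorecon declining to relabel these files: their on-disk type, container_file_t carrying the pod's MCS category pair c10,c16, differs from what the file-context policy would assign under /var/lib/kubelet, but container_file_t is treated as an admin-customizable type, so it is left alone unless a reset is forced. A quick way to see the mismatch being reported, using one of the files from the log above:

  # Label currently on disk (set by the container runtime for this pod)
  ls -Z /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem

  # Label the loaded policy would assign to the same path
  matchpathcon /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem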
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 
19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 19:16:45 crc 
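Every entry above follows one pattern: restorecon compared a file's on-disk label against the file-contexts policy, saw that the file's current type (container_file_t) is declared customizable, and therefore left the label alone, logging "not reset as customized by admin". On RHEL-family policies the container-selinux package marks container_file_t customizable, so this flood of messages during kubelet startup is expected for runtime-labeled pod volumes rather than an error; only a forced relabel (restorecon -F) would overwrite these labels. A minimal sketch for inspecting the same label restorecon reads (Python, Linux-only; the function name and example path are invented for illustration):

    import os

    def selinux_label(path: str) -> str:
        # The kernel exposes a file's SELinux context as an extended
        # attribute; the value is NUL-terminated, so strip that byte.
        raw = os.getxattr(path, "security.selinux")
        return raw.rstrip(b"\x00").decode()

    if __name__ == "__main__":
        # Any path from the log would do; /etc/hosts exists everywhere.
        print(selinux_label("/etc/hosts"))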
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
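The level portion of each context (s0:c10,c16 for the registry pod, s0:c9,c17 for cluster-samples-operator, s0:c0,c15 for machine-api-operator, s0:c20,c21, s0:c214,c928, and so on) is the MCS category pair the container runtime allocated to the pod: distinct pods get distinct pairs, which is what keeps one pod's processes from touching another pod's files even though everything shares the type container_file_t. Some container directories within a single pod show a second pair (for example kube-controller-manager's c776,c1007 alongside c214,c928), presumably retained from an earlier sandbox allocation. A throwaway comparison helper (Python; function and variable names invented) makes the pairing visible:

    def mcs_pair(context: str) -> frozenset[str]:
        # A context is user:role:type:level; the level looks like "s0:c10,c16".
        level = context.split(":", 3)[3]
        _, _, cats = level.partition(":")
        return frozenset(cats.split(",")) if cats else frozenset()

    registry = mcs_pair("system_u:object_r:container_file_t:s0:c10,c16")
    samples = mcs_pair("system_u:object_r:container_file_t:s0:c9,c17")
    print(registry.isdisjoint(samples))  # True: the pods share no category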
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
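The ..data and ..2025_02_24_06_20_07.* entries in the configmap volumes above are the kubelet's atomic-writer layout: each payload version lands in a fresh timestamped directory, ..data is a symlink pointing at the current version, and the visible names (config.yaml, ca-bundle.crt, ...) are symlinks routed through ..data, so an update is a single symlink swap rather than an in-place rewrite. A rough sketch of that pattern, modeled on the observed layout rather than taken from kubelet itself (Python; all function and variable names invented):

    import os, tempfile, time

    def publish(volume_dir: str, files: dict[str, bytes]) -> None:
        # Each payload version goes into a fresh "..<timestamp>.<random>" dir.
        version_dir = tempfile.mkdtemp(
            prefix=time.strftime("..%Y_%m_%d_%H_%M_%S."), dir=volume_dir)
        for name, data in files.items():
            with open(os.path.join(version_dir, name), "wb") as f:
                f.write(data)
        # Swap "..data" to the new version atomically via rename(2).
        tmp = os.path.join(volume_dir, "..data_tmp")
        os.symlink(os.path.basename(version_dir), tmp)
        os.replace(tmp, os.path.join(volume_dir, "..data"))
        # The visible names resolve through "..data" and never change.
        for name in files:
            link = os.path.join(volume_dir, name)
            if not os.path.islink(link):
                os.symlink(os.path.join("..data", name), link)

Because readers always traverse ..data, they see either the old version or the new one in full, never a half-written mix, which is why restorecon encounters both the dated directories and the stable symlink names here.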
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 
19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 19:16:45 crc restorecon[4752]: 
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:45 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 19:16:46 crc restorecon[4752]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Nov 24 19:16:46 crc kubenswrapper[4812]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 19:16:46 crc kubenswrapper[4812]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Nov 24 19:16:46 crc kubenswrapper[4812]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 19:16:46 crc kubenswrapper[4812]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 19:16:46 crc kubenswrapper[4812]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 24 19:16:46 crc kubenswrapper[4812]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.661471 4812 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
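[Editor's note] The deprecation warnings above all point to the same remedy: carry these settings in the KubeletConfiguration file named by the kubelet's --config flag instead of on the command line. A minimal sketch of that mapping follows; the field names are the documented config-file equivalents of the flags in the log, but every value shown is an illustrative assumption, not read from this node.

```yaml
# Hypothetical KubeletConfiguration sketch (values are placeholders, not this node's).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"  # replaces --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"  # replaces --volume-plugin-dir
registerWithTaints:                                          # replaces --register-with-taints
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
systemReserved:                                              # replaces --system-reserved
  cpu: "500m"
  memory: "1Gi"
evictionHard:                                                # per the warning, eviction thresholds
  memory.available: "100Mi"                                  # supersede --minimum-container-ttl-duration
```

--pod-infra-container-image has no config-file replacement here: as the warning and the server.go message note, the sandbox image is taken from CRI, and the flag only keeps the image garbage collector from pruning it.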
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670038 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670074 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670083 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670092 4812 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670101 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670110 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670120 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670129 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670323 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670331 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670364 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670372 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670380 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670388 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670396 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670404 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670412 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670420 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670427 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670435 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670443 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
VolumeGroupSnapshot Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670451 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670459 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670466 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670476 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670484 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670491 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670502 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670512 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670520 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670527 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670535 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670543 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670551 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670562 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670571 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670580 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670588 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670597 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670606 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670614 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670621 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670629 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670637 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670644 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670654 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670664 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670673 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670683 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670691 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670699 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670706 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670718 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670727 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670736 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670745 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670753 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670761 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670769 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670777 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670785 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670793 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670801 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670808 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670816 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670824 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670831 4812 feature_gate.go:330] unrecognized feature gate: Example Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670849 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670857 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670865 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.670873 4812 feature_gate.go:330] unrecognized feature gate: 
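Each "unrecognized feature gate" warning above names an OpenShift-level gate that the cluster hands to the kubelet, whose compiled-in gate table only contains upstream Kubernetes gates; unknown names are warned about and skipped rather than failing startup. A minimal sketch of that warn-and-skip pattern, assuming a toy gate table (illustrative only, not the actual component-base feature_gate.go code):

```go
package main

import "fmt"

// Toy stand-in for the kubelet's compiled-in feature gate table.
var known = map[string]bool{
	"KMSv1":                     true,
	"CloudDualStackNodeIPs":     true,
	"ValidatingAdmissionPolicy": true,
}

func main() {
	// Gate names handed down from cluster configuration. "GatewayAPI" is one
	// of the OpenShift-only names the log shows the kubelet rejecting.
	requested := map[string]bool{"KMSv1": true, "GatewayAPI": false}
	for name, enabled := range requested {
		if _, ok := known[name]; !ok {
			// Warn and continue: startup is not aborted for unknown gates.
			fmt.Printf("W: unrecognized feature gate: %s\n", name)
			continue
		}
		fmt.Printf("I: feature gate %s=%t\n", name, enabled)
	}
}
```

Only gates present in the compiled-in table actually change behavior, which is why the effective set logged later by feature_gate.go:386 is so much smaller than the list warned about here.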
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672448 4812 flags.go:64] FLAG: --address="0.0.0.0"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672472 4812 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672498 4812 flags.go:64] FLAG: --anonymous-auth="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672511 4812 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672523 4812 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672532 4812 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672544 4812 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672555 4812 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672564 4812 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672574 4812 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672584 4812 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672593 4812 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672602 4812 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672611 4812 flags.go:64] FLAG: --cgroup-root=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672620 4812 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672629 4812 flags.go:64] FLAG: --client-ca-file=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672637 4812 flags.go:64] FLAG: --cloud-config=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672646 4812 flags.go:64] FLAG: --cloud-provider=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672654 4812 flags.go:64] FLAG: --cluster-dns="[]"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672671 4812 flags.go:64] FLAG: --cluster-domain=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672680 4812 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672689 4812 flags.go:64] FLAG: --config-dir=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672697 4812 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672707 4812 flags.go:64] FLAG: --container-log-max-files="5"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672717 4812 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672726 4812 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672735 4812 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672744 4812 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672753 4812 flags.go:64] FLAG: --contention-profiling="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672762 4812 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672771 4812 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672780 4812 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672804 4812 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672816 4812 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672825 4812 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672834 4812 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672842 4812 flags.go:64] FLAG: --enable-load-reader="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672851 4812 flags.go:64] FLAG: --enable-server="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672860 4812 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672871 4812 flags.go:64] FLAG: --event-burst="100"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672881 4812 flags.go:64] FLAG: --event-qps="50"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672890 4812 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672898 4812 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672907 4812 flags.go:64] FLAG: --eviction-hard=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672918 4812 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672927 4812 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672936 4812 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672945 4812 flags.go:64] FLAG: --eviction-soft=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672954 4812 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672963 4812 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672971 4812 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672980 4812 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672989 4812 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.672997 4812 flags.go:64] FLAG: --fail-swap-on="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673008 4812 flags.go:64] FLAG: --feature-gates=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673018 4812 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673027 4812 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673036 4812 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673045 4812 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673055 4812 flags.go:64] FLAG: --healthz-port="10248"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673064 4812 flags.go:64] FLAG: --help="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673075 4812 flags.go:64] FLAG: --hostname-override=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673087 4812 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673098 4812 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673111 4812 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673123 4812 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673134 4812 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673145 4812 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673158 4812 flags.go:64] FLAG: --image-service-endpoint=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673169 4812 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673181 4812 flags.go:64] FLAG: --kube-api-burst="100"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673193 4812 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673203 4812 flags.go:64] FLAG: --kube-api-qps="50"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673211 4812 flags.go:64] FLAG: --kube-reserved=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673222 4812 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673232 4812 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673244 4812 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673255 4812 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673266 4812 flags.go:64] FLAG: --lock-file=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673277 4812 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673288 4812 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673300 4812 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673318 4812 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673327 4812 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673370 4812 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673382 4812 flags.go:64] FLAG: --logging-format="text"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673393 4812 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673407 4812 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673418 4812 flags.go:64] FLAG: --manifest-url=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673429 4812 flags.go:64] FLAG: --manifest-url-header=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673444 4812 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673457 4812 flags.go:64] FLAG: --max-open-files="1000000"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673468 4812 flags.go:64] FLAG: --max-pods="110"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673479 4812 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673491 4812 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673502 4812 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673514 4812 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673526 4812 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673538 4812 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673550 4812 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673574 4812 flags.go:64] FLAG: --node-status-max-images="50"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673586 4812 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673598 4812 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673609 4812 flags.go:64] FLAG: --pod-cidr=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673623 4812 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673641 4812 flags.go:64] FLAG: --pod-manifest-path=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673652 4812 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673664 4812 flags.go:64] FLAG: --pods-per-core="0"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673677 4812 flags.go:64] FLAG: --port="10250"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673688 4812 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673699 4812 flags.go:64] FLAG: --provider-id=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673710 4812 flags.go:64] FLAG: --qos-reserved=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673721 4812 flags.go:64] FLAG: --read-only-port="10255"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673733 4812 flags.go:64] FLAG: --register-node="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673744 4812 flags.go:64] FLAG: --register-schedulable="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673754 4812 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673770 4812 flags.go:64] FLAG: --registry-burst="10"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673779 4812 flags.go:64] FLAG: --registry-qps="5"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673788 4812 flags.go:64] FLAG: --reserved-cpus=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673796 4812 flags.go:64] FLAG: --reserved-memory=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673807 4812 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673816 4812 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673825 4812 flags.go:64] FLAG: --rotate-certificates="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673835 4812 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673844 4812 flags.go:64] FLAG: --runonce="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673853 4812 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673863 4812 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673873 4812 flags.go:64] FLAG: --seccomp-default="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673883 4812 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673893 4812 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673902 4812 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673912 4812 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673921 4812 flags.go:64] FLAG: --storage-driver-password="root"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673929 4812 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673938 4812 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673947 4812 flags.go:64] FLAG: --storage-driver-user="root"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673955 4812 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673964 4812 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673973 4812 flags.go:64] FLAG: --system-cgroups=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673982 4812 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.673998 4812 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674007 4812 flags.go:64] FLAG: --tls-cert-file=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674016 4812 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674027 4812 flags.go:64] FLAG: --tls-min-version=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674036 4812 flags.go:64] FLAG: --tls-private-key-file=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674045 4812 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674054 4812 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674063 4812 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674072 4812 flags.go:64] FLAG: --v="2"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674083 4812 flags.go:64] FLAG: --version="false"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674094 4812 flags.go:64] FLAG: --vmodule=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674104 4812 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674113 4812 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
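The FLAG dump above is the kubelet's complete command line as parsed; note that the deprecated --system-reserved and --pod-infra-container-image flags warned about earlier are indeed still passed, alongside --config="/etc/kubernetes/kubelet.conf". When auditing such a dump it can help to pull the flag/value pairs out of the journal. A small sketch that does this over stdin (the "FLAG:" line format is taken from the log; reading from stdin is an assumption for the example):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches lines like: flags.go:64] FLAG: --max-pods="110"
	re := regexp.MustCompile(`flags\.go:64\] FLAG: (--[\w-]+)="(.*)"`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s = %q\n", m[1], m[2])
		}
	}
}
```

Piping journalctl -u kubelet through it yields one flag/value pair per line, which is easier to diff against the rendered config file than the raw journal.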
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674315 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674326 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674398 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674409 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674417 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674425 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674433 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674441 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674451 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674458 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674466 4812 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674473 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674481 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674489 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674496 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674504 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674512 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674520 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674527 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674535 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674542 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674550 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674558 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674569 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674580 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674588 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674596 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674605 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674613 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674620 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674628 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674635 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674643 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674651 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674659 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674666 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674674 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674681 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674689 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674697 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674707 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674718 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674727 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674736 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674744 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674752 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674759 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674767 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674774 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674785 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674794 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674802 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674810 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674818 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674827 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674835 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674843 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674852 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674860 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674869 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674877 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674886 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674894 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674901 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674911 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674919 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674927 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674935 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674943 4812 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674950 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.674960 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.674985 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.690931 4812 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.690985 4812 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691163 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691186 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691196 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691207 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691217 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691228 4812 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691236 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691245 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691253 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691261 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691270 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691278 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691285 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691295 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691307 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691317 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691327 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691365 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691375 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691384 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691392 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691401 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691409 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691416 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691428 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691439 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691449 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691458 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691466 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691475 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691483 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691491 4812 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691501 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691514 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691529 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691540 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691551 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691562 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691572 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691584 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691593 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691603 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691613 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691623 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691633 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691643 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691654 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691664 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691674 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691682 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691692 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691700 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691709 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691717 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691724 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691735 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691746 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691756 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691764 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691773 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691781 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691788 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691796 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691803 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691811 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691819 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691827 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691837 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691846 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691854 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.691863 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.691877 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692113 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692126 4812 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692135 4812 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692143 4812 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692151 4812 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692159 4812 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692166 4812 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692174 4812 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692182 4812 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692190 4812 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692198 4812 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692206 4812 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692214 4812 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692221 4812 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692230 4812 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692240 4812 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692250 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692263 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692273 4812 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692284 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692295 4812 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692305 4812 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692312 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692320 4812 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692328 4812 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692363 4812 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692371 4812 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692379 4812 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692387 4812 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692395 4812 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692405 4812 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692415 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692423 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692431 4812 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692440 4812 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692452 4812 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692461 4812 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692469 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692480 4812 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692490 4812 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692499 4812 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692509 4812 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692519 4812 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692531 4812 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692542 4812 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692552 4812 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692562 4812 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692572 4812 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692581 4812 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692592 4812 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692604 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692614 4812 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692624 4812 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692635 4812 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692644 4812 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692654 4812 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692664 4812 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692672 4812 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692683 4812 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692691 4812 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692699 4812 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692708 4812 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692718 4812 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692729 4812 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692737 4812 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692746 4812 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692755 4812 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692763 4812 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692771 4812 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692779 4812 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.692786 4812 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.692799 4812 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.693978 4812 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.706363 4812 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.706531 4812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
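The bootstrap check above found the existing kubeconfig still valid and loaded the client certificate pair from /var/lib/kubelet/pki/kubelet-client-current.pem; the rotation entries that follow derive from that certificate's validity window (the logged wait of 254h23m59s matches the span from Nov 24 19:16:46 to the 2025-12-05 09:40:46 rotation deadline almost exactly). A sketch for inspecting that certificate's expiry on the node (path taken from the log; running it with read access to the file is assumed):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		// The file bundles the private key and certificate; skip non-cert blocks.
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
	}
}
```

Comparing the printed notAfter (2026-02-24 05:52:08 UTC per the log) against the logged rotation deadline shows the manager schedules renewal well before expiry rather than at it.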
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.709852 4812 server.go:997] "Starting client certificate rotation"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.709904 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.711129 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-05 09:40:46.116571882 +0000 UTC
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.711269 4812 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 254h23m59.40531034s for next certificate rotation
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.737733 4812 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.748185 4812 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.775269 4812 log.go:25] "Validated CRI v1 runtime API"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.811825 4812 log.go:25] "Validated CRI v1 image API"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.814084 4812 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.823449 4812 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-19-12-12-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.823505 4812 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.859136 4812 manager.go:217] Machine: {Timestamp:2025-11-24 19:16:46.855329469 +0000 UTC m=+0.644281940 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c55eafaa-534f-4192-bbf2-c31e1d9c2aed BootID:4c3675a7-88da-4626-a835-681b7b2f3a7b Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:aa:1a:f0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:aa:1a:f0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cc:e5:99 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:48:3b:8b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5e:6c:aa Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:78:29:95 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:79:27:43 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:d0:3a:9f:01:d7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:be:bf:08:21:e0:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.859640 4812 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.859889 4812 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.862897 4812 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.863432 4812 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.863530 4812 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.863986 4812 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.864016 4812 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.864690 4812 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.864746 4812 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.865125 4812 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.865319 4812 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.871400 4812 kubelet.go:418] "Attempting to sync node with API server"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.871450 4812 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.871498 4812 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.871524 4812 kubelet.go:324] "Adding apiserver pod source"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.871546 4812 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.882938 4812 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.885473 4812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.885838 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.885885 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.886017 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.886104 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.887449 4812 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890376 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890428 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890444 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890460 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890483 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890497 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890510 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890532 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890549 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890564 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890584 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.890598 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.891865 4812 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.892799 4812 server.go:1280] "Started kubelet"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.893909 4812 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.894107 4812 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 24 19:16:46 crc systemd[1]: Started Kubernetes Kubelet.
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.895433 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.895812 4812 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.898389 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.898461 4812 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.898672 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:03:13.175158694 +0000 UTC
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.898839 4812 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.898867 4812 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.899008 4812 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.899292 4812 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.899745 4812 server.go:460] "Adding debug handlers to kubelet server"
Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.899780 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused
Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.899870 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError"
Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.901144 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="200ms"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.901463 4812 factory.go:55] Registering systemd factory
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.901494 4812 factory.go:221] Registration of the systemd container factory successfully
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.901938 4812 factory.go:153] Registering CRI-O factory
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.901990 4812 factory.go:221] Registration of the crio container factory successfully
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.903513 4812 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.903612 4812 factory.go:103] Registering Raw factory
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.903654 4812 manager.go:1196] Started watching for new ooms in manager
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.904730 4812 manager.go:319] Starting recovery of all containers
Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.905108 4812 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.36:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b0764696feae6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 19:16:46.892747494 +0000 UTC m=+0.681699905,LastTimestamp:2025-11-24 19:16:46.892747494 +0000 UTC m=+0.681699905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915661 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915725 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915743 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915759 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915774 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915790 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915806 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915826 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915847 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.915862 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.917890 4812 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.917922 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.917940 4812 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.917953 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.917973 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.917987 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.917997 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918010 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918023 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918035 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918045 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918054 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918067 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918080 4812 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918120 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918136 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918152 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918205 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918222 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918238 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918252 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918265 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918277 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918291 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918304 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918317 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918437 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918451 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918464 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918477 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918489 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918501 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918513 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918526 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918539 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918549 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918561 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918606 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918623 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918636 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918650 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918662 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918673 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918712 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918728 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918741 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918753 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918766 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918779 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918788 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918797 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918808 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918821 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918834 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918846 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918856 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918867 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918881 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918893 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918904 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918915 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918930 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918940 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918949 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918959 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918971 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918982 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.918992 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919003 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919016 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919026 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919037 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919046 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919057 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919068 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919079 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919092 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919103 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919113 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919129 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919144 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919159 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919172 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919185 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919198 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919213 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919226 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919237 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919253 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919267 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919278 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919291 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919302 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919319 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919345 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919367 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919380 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919395 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919408 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919421 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919435 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919450 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919462 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919475 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919488 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919501 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919513 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919525 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919539 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919551 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919563 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919576 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919590 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919603 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919617 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919630 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919644 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919661 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919675 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919688 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919701 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919717 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919729 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919742 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919755 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919770 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919785 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919798 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919811 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919823 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919836 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919875 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919893 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919907 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919921 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919933 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919946 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919960 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919973 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919985 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.919997 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920008 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920020 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920034 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920048 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920061 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920076 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920088 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920101 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920118 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920130 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920145 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920159 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920172 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920184 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920198 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920212 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920235 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920253 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920268 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920285 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920301 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920315 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920344 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920357 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920372 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920387 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920402 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920416 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920431 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920449 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920462 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920474 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920489 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920504 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920521 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920539 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920553 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920565 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920580 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920596 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920613 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920629 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920644 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920658 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920677 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920695 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920712 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920725 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920744 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920758 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920772 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920786 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920800 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920813 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920826 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920841 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920853 4812 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920864 4812 reconstruct.go:97] "Volume reconstruction finished" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.920874 4812 reconciler.go:26] "Reconciler: start to sync state" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.941016 4812 manager.go:324] Recovery completed Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.958262 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.960409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.960485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.960504 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.960908 4812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.963014 4812 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.963051 4812 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.963085 4812 state_mem.go:36] "Initialized new in-memory state store" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.963899 4812 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.964128 4812 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.964270 4812 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.964604 4812 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 19:16:46 crc kubenswrapper[4812]: W1124 19:16:46.966032 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.966109 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.979246 4812 policy_none.go:49] "None policy: Start" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.981818 4812 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 19:16:46 crc kubenswrapper[4812]: I1124 19:16:46.981888 4812 state_mem.go:35] "Initializing new in-memory state store" Nov 24 19:16:46 crc kubenswrapper[4812]: E1124 19:16:46.999955 4812 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.054731 4812 manager.go:334] "Starting Device Plugin manager" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.055044 4812 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.055067 4812 server.go:79] "Starting device plugin registration server" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.056057 4812 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.056080 4812 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.056631 4812 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 
19:16:47.056833 4812 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.056866 4812 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.065395 4812 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.065477 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.066092 4812 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.068108 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.068163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.068186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.068433 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.068870 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.068954 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.069825 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.069862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.069883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.070076 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.070916 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.072559 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073512 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073662 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073927 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.073964 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.074025 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.075056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.075122 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.075143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.076182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.076235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.076261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.078560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.078601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.078618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.078821 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 
19:16:47.079457 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.079907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.079958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.079978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.080327 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.080389 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.080438 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.081603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.081641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.081655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.082160 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.082209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.082223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.101912 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="400ms" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.127833 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.127929 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.127981 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128027 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128074 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128118 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128166 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128267 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128416 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128440 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128462 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128486 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.128680 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.157043 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.158726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.158773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.158783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.158813 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.159630 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.229768 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.229832 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.229861 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc 
kubenswrapper[4812]: I1124 19:16:47.229887 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.229918 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.229948 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.229979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230002 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230054 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230009 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230084 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230116 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230178 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230180 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230114 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230363 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230420 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230469 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230513 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") 
pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230561 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230613 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230643 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230656 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230670 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.230620 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.360787 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.363069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.363150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.363176 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.363212 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.364034 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.403175 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.434302 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: W1124 19:16:47.456739 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-912d5ed9758c01021a4a79e3ddc5d8fdce2048748fbabe8563e06cb38c8fc000 WatchSource:0}: Error finding container 912d5ed9758c01021a4a79e3ddc5d8fdce2048748fbabe8563e06cb38c8fc000: Status 404 returned error can't find the container with id 912d5ed9758c01021a4a79e3ddc5d8fdce2048748fbabe8563e06cb38c8fc000 Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.468699 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: W1124 19:16:47.468930 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9995bb6f99fc607c59918ce5399de0c610f0d9068cc69a4e3f4a8b5f5715d092 WatchSource:0}: Error finding container 9995bb6f99fc607c59918ce5399de0c610f0d9068cc69a4e3f4a8b5f5715d092: Status 404 returned error can't find the container with id 9995bb6f99fc607c59918ce5399de0c610f0d9068cc69a4e3f4a8b5f5715d092 Nov 24 19:16:47 crc kubenswrapper[4812]: W1124 19:16:47.486411 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-e6c26a344fb9d7194cbea9a6d98607fcaaf6daf6e66d7c07c7e6bf3b7662b47a WatchSource:0}: Error finding container e6c26a344fb9d7194cbea9a6d98607fcaaf6daf6e66d7c07c7e6bf3b7662b47a: Status 404 returned error can't find the container with id e6c26a344fb9d7194cbea9a6d98607fcaaf6daf6e66d7c07c7e6bf3b7662b47a Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.498970 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.503661 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="800ms" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.508195 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:47 crc kubenswrapper[4812]: W1124 19:16:47.516789 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3a1a427687c7dac19506954b5166f12bacbf291a6bdb086eb8056b81b39e8520 WatchSource:0}: Error finding container 3a1a427687c7dac19506954b5166f12bacbf291a6bdb086eb8056b81b39e8520: Status 404 returned error can't find the container with id 3a1a427687c7dac19506954b5166f12bacbf291a6bdb086eb8056b81b39e8520 Nov 24 19:16:47 crc kubenswrapper[4812]: W1124 19:16:47.521888 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3ed06d6f20e1f1de5c0e97ea5ffb3726cc2b56967f3a1c3603f9f2382d93f5cf WatchSource:0}: Error finding container 3ed06d6f20e1f1de5c0e97ea5ffb3726cc2b56967f3a1c3603f9f2382d93f5cf: Status 404 returned error can't find the container with id 3ed06d6f20e1f1de5c0e97ea5ffb3726cc2b56967f3a1c3603f9f2382d93f5cf Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.765156 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.767282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.767367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.767381 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.767414 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.767843 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Nov 24 19:16:47 crc kubenswrapper[4812]: W1124 19:16:47.858219 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.858374 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.896635 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.899682 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 09:47:18.195701339 +0000 UTC Nov 24 19:16:47 crc 
kubenswrapper[4812]: I1124 19:16:47.899773 4812 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 614h30m30.295931552s for next certificate rotation Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.970048 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9995bb6f99fc607c59918ce5399de0c610f0d9068cc69a4e3f4a8b5f5715d092"} Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.971250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"912d5ed9758c01021a4a79e3ddc5d8fdce2048748fbabe8563e06cb38c8fc000"} Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.972765 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3ed06d6f20e1f1de5c0e97ea5ffb3726cc2b56967f3a1c3603f9f2382d93f5cf"} Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.973924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a1a427687c7dac19506954b5166f12bacbf291a6bdb086eb8056b81b39e8520"} Nov 24 19:16:47 crc kubenswrapper[4812]: I1124 19:16:47.975429 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e6c26a344fb9d7194cbea9a6d98607fcaaf6daf6e66d7c07c7e6bf3b7662b47a"} Nov 24 19:16:47 crc kubenswrapper[4812]: W1124 19:16:47.995983 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:47 crc kubenswrapper[4812]: E1124 19:16:47.996386 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Nov 24 19:16:48 crc kubenswrapper[4812]: W1124 19:16:48.109932 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:48 crc kubenswrapper[4812]: E1124 19:16:48.110069 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Nov 24 19:16:48 crc kubenswrapper[4812]: E1124 19:16:48.304807 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="1.6s" Nov 24 19:16:48 
crc kubenswrapper[4812]: W1124 19:16:48.423238 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:48 crc kubenswrapper[4812]: E1124 19:16:48.423399 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.36:6443: connect: connection refused" logger="UnhandledError" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.568937 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.570957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.571030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.571053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.571098 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 19:16:48 crc kubenswrapper[4812]: E1124 19:16:48.571753 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.897144 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.980556 4812 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c" exitCode=0 Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.980707 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.980712 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.982186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.982224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.982242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.983321 4812 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626" exitCode=0 Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.983416 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.983446 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.984455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.984490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.984506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.987967 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.988015 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.988025 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.988034 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.988033 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.988936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.989006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.989081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.991660 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380" exitCode=0 Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.991697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.991803 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.993366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.993407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.993428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.994037 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3a5b106331ecbdf6223c0a417dc103af41e3d44e823fe435b8cbffb34819445a" exitCode=0 Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.994068 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3a5b106331ecbdf6223c0a417dc103af41e3d44e823fe435b8cbffb34819445a"} Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.994173 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.994834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.994871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:48 crc kubenswrapper[4812]: I1124 19:16:48.994889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:49 crc kubenswrapper[4812]: I1124 19:16:49.000898 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:49 crc kubenswrapper[4812]: I1124 19:16:49.002030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:49 crc kubenswrapper[4812]: I1124 19:16:49.002068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:49 crc kubenswrapper[4812]: I1124 19:16:49.002131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:49 crc kubenswrapper[4812]: I1124 19:16:49.897085 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.36:6443: connect: connection refused Nov 24 19:16:49 crc kubenswrapper[4812]: E1124 19:16:49.906411 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.36:6443: connect: connection refused" interval="3.2s" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.001954 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.002057 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.002078 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.002118 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.005200 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3100cef9269b8932c3796554abaaa73b6083a53c903ea3c255dfb4806c65838e" exitCode=0 Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.005371 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3100cef9269b8932c3796554abaaa73b6083a53c903ea3c255dfb4806c65838e"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.005411 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.006749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.006791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.006809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.010819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.010870 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.010886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.010909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.011970 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.012023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.012045 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.015477 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.015521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0d638be4feba4e12b69f18ce811971301474f2cc3fcac38d24389f4db9bb921f"} Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.015545 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.016546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.016578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.016589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.017985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.018038 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.018059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.040563 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.060977 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.172224 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.173701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.173776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.173787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.173844 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 19:16:50 crc kubenswrapper[4812]: E1124 19:16:50.174671 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.36:6443: connect: connection refused" node="crc" Nov 24 19:16:50 crc kubenswrapper[4812]: I1124 19:16:50.608832 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.024266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e"} Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.024409 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.025823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.025881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.025902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.029418 4812 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="763563e40455af3ff76a193d3836ff1642aefa3bbe5d8859465b9245da8c9fa6" exitCode=0 Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.029570 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.029616 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.029638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"763563e40455af3ff76a193d3836ff1642aefa3bbe5d8859465b9245da8c9fa6"} Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.029690 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.029626 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032107 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032254 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.032292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.034747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.034789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:51 crc kubenswrapper[4812]: I1124 19:16:51.034818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.040058 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.040230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"70b6d5f5190f8221fa3f7722e5a588150fed89d00fc7642d9bc79a2b33a9f08f"} Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.040976 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abf216fa83ef5fa98d2ff37110ab603c5b2ba3ad8a7058d2c2dfa841fd62ea48"} Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.041019 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.041037 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e1b2c0976caab5b6dd5e2876a4b2a103769b9d4f5c7e0d821bdbbb77a37f9cfe"} Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.040509 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.040397 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043118 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043291 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:52 crc kubenswrapper[4812]: I1124 19:16:52.043888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.051517 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ea26a8e57647f1910976dfe5afdc45869a513a108d780cbc81e92fed2a61021f"} Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.051585 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.051679 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.051593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"27f8e3348ff228f38a656f0a2f6af9023fa101953b96b8ff565a73d237587513"} Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.053510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.053536 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.053557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.053588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.053617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.053618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.061637 4812 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.061737 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.324460 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:53 
crc kubenswrapper[4812]: I1124 19:16:53.374808 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.376620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.376697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.376718 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.376768 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.837004 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.837220 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.839132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.839198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.839222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.847652 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:53 crc kubenswrapper[4812]: I1124 19:16:53.934666 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.055260 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.055325 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.055325 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.057479 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.057522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.057539 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.057557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.057563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.057693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.058153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.058363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.058552 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:54 crc kubenswrapper[4812]: I1124 19:16:54.691201 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.058933 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.060476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.060526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.060546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.064167 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.064302 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.065377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.065423 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.065441 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.223912 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.224224 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.225920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.225994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:55 crc kubenswrapper[4812]: I1124 19:16:55.226020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:56 crc kubenswrapper[4812]: I1124 19:16:56.062294 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:16:56 crc kubenswrapper[4812]: I1124 19:16:56.064393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:16:56 crc kubenswrapper[4812]: I1124 19:16:56.064458 4812 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:16:56 crc kubenswrapper[4812]: I1124 19:16:56.064482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:16:57 crc kubenswrapper[4812]: E1124 19:16:57.066313 4812 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 19:17:00 crc kubenswrapper[4812]: W1124 19:17:00.623580 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 19:17:00 crc kubenswrapper[4812]: I1124 19:17:00.623689 4812 trace.go:236] Trace[1491322175]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 19:16:50.622) (total time: 10001ms): Nov 24 19:17:00 crc kubenswrapper[4812]: Trace[1491322175]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:00.623) Nov 24 19:17:00 crc kubenswrapper[4812]: Trace[1491322175]: [10.001477376s] [10.001477376s] END Nov 24 19:17:00 crc kubenswrapper[4812]: E1124 19:17:00.623720 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 19:17:00 crc kubenswrapper[4812]: W1124 19:17:00.704040 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 19:17:00 crc kubenswrapper[4812]: I1124 19:17:00.704130 4812 trace.go:236] Trace[1639788410]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 19:16:50.702) (total time: 10001ms): Nov 24 19:17:00 crc kubenswrapper[4812]: Trace[1639788410]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:00.704) Nov 24 19:17:00 crc kubenswrapper[4812]: Trace[1639788410]: [10.001166998s] [10.001166998s] END Nov 24 19:17:00 crc kubenswrapper[4812]: E1124 19:17:00.704155 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 19:17:00 crc kubenswrapper[4812]: W1124 19:17:00.804089 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 19:17:00 crc kubenswrapper[4812]: I1124 19:17:00.804206 4812 trace.go:236] Trace[1554654605]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 19:16:50.802) (total time: 10001ms): Nov 24 19:17:00 crc 
kubenswrapper[4812]: Trace[1554654605]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (19:17:00.804) Nov 24 19:17:00 crc kubenswrapper[4812]: Trace[1554654605]: [10.001203498s] [10.001203498s] END Nov 24 19:17:00 crc kubenswrapper[4812]: E1124 19:17:00.804235 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 19:17:00 crc kubenswrapper[4812]: I1124 19:17:00.897548 4812 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 19:17:00 crc kubenswrapper[4812]: W1124 19:17:00.924359 4812 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 19:17:00 crc kubenswrapper[4812]: I1124 19:17:00.924446 4812 trace.go:236] Trace[1836285374]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 19:16:50.921) (total time: 10003ms): Nov 24 19:17:00 crc kubenswrapper[4812]: Trace[1836285374]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (19:17:00.924) Nov 24 19:17:00 crc kubenswrapper[4812]: Trace[1836285374]: [10.003327142s] [10.003327142s] END Nov 24 19:17:00 crc kubenswrapper[4812]: E1124 19:17:00.924469 4812 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 24 19:17:01 crc kubenswrapper[4812]: I1124 19:17:01.442490 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 19:17:01 crc kubenswrapper[4812]: I1124 19:17:01.442589 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 19:17:01 crc kubenswrapper[4812]: I1124 19:17:01.448177 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 19:17:01 crc kubenswrapper[4812]: I1124 19:17:01.448265 4812 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 19:17:03 crc kubenswrapper[4812]: I1124 19:17:03.061899 4812 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 19:17:03 crc kubenswrapper[4812]: I1124 19:17:03.062050 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 19:17:04 crc kubenswrapper[4812]: I1124 19:17:04.730139 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 19:17:04 crc kubenswrapper[4812]: I1124 19:17:04.730379 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:17:04 crc kubenswrapper[4812]: I1124 19:17:04.731624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:04 crc kubenswrapper[4812]: I1124 19:17:04.731688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:04 crc kubenswrapper[4812]: I1124 19:17:04.731700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:04 crc kubenswrapper[4812]: I1124 19:17:04.750206 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.071696 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.071942 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.073542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.073645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.073666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.085002 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.086389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.086445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 
19:17:05.086464 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.231037 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.231293 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.232776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.232866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.232894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.236250 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.388625 4812 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 19:17:05 crc kubenswrapper[4812]: I1124 19:17:05.913074 4812 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.005340 4812 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.088278 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.088397 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.089641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.089709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.089732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.116031 4812 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.441765 4812 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.446082 4812 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.448990 4812 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.497860 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.497924 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.497868 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.498093 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.502662 4812 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56560->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.502723 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56560->192.168.126.11:17697: read: connection reset by peer" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.881942 4812 apiserver.go:52] "Watching apiserver" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.885745 4812 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.886317 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.886888 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.887095 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.887123 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.887199 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.887230 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.887771 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.887879 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.887953 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.887956 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.892520 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.892587 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.893053 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.893932 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.894118 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.894556 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.894577 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.894765 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.894784 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.900222 4812 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.925321 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.941704 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.949599 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.949709 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.949772 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.949823 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:17:07.449775125 +0000 UTC m=+21.238727546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.949876 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.949937 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950005 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950070 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950107 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950124 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950192 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950266 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950299 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950386 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950426 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950463 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950496 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950529 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950561 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950600 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950644 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950682 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950750 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950821 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950853 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950886 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950936 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.950968 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951034 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951072 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951103 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951135 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951170 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951234 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951271 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951305 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951363 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951399 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951430 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951443 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951464 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951555 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951611 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951667 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951725 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.952632 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.952747 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.952802 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.952856 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953039 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953092 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953998 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954069 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954121 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954178 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954227 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954379 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954445 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954504 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954558 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954610 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954662 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954718 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954806 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954865 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954914 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954984 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955040 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955094 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955145 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955205 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955257 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955409 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955466 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955524 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955578 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955637 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955719 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955787 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955843 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955895 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955949 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.951609 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.956005 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.952768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.956060 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953208 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953388 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953638 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953656 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.953745 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954263 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954736 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.954820 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955096 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955100 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955252 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955461 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955508 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.956647 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.956846 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.956954 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957131 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957190 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957256 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957248 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955720 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955711 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955889 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955940 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.956119 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957728 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957752 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957796 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957836 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957927 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.957964 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958001 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958001 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958039 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958078 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958114 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958151 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958160 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958187 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958209 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958230 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958270 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958297 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958310 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958379 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958416 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958454 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958493 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958530 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958568 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958568 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958601 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958637 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958673 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958716 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958767 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958868 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958905 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958980 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959026 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959080 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959201 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959260 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959321 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959419 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959479 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959541 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959590 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959652 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959764 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959819 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959876 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959936 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959992 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960046 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: 
\"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960102 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960164 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960214 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960270 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960324 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960416 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960469 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960520 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960573 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960627 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960767 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960832 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960896 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960955 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961013 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961069 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961125 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961249 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961315 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961412 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961477 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961528 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961580 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961649 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961714 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961782 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961851 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961921 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 19:17:06 crc 
kubenswrapper[4812]: I1124 19:17:06.962008 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962073 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962137 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962208 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962267 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962322 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962414 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962467 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962526 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962580 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: 
\"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962635 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962689 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964002 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964115 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964598 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964694 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964793 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964959 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965044 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965135 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965254 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965314 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965417 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965497 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965566 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965634 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965720 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965777 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.958902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959779 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.966796 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.966800 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.967237 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.967289 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.967482 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.967594 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.967698 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.967792 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.968037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.968248 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.968331 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.968695 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.968870 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959806 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959815 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960087 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960624 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960625 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960806 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.960846 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961256 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961281 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961326 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961401 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961493 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961712 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961786 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.961843 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962102 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.959694 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962313 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.962938 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963012 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963030 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963078 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963617 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963879 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.969180 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963922 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.963931 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964327 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964467 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964129 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964634 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.964719 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965119 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965030 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965234 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965250 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965571 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965605 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965692 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965700 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.965902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.966139 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.966313 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.969060 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.969727 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.969903 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.968563 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970367 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970449 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970514 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970704 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972299 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972420 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972753 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972849 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972908 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.955537 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974124 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974375 4812 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974423 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974465 4812 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974508 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974536 4812 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974567 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974602 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974633 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974663 4812 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974691 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974719 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974735 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974761 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974867 4812 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974920 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974995 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975029 4812 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975060 4812 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975090 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 
19:17:06.975767 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975800 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975823 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975844 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975864 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975883 4812 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975903 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975922 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975941 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975960 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975979 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975997 4812 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976015 4812 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976040 4812 reconciler_common.go:293] "Volume 
detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976062 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976081 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976101 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976119 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976138 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976156 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976160 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976175 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976268 4812 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976302 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976386 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976419 4812 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc 
kubenswrapper[4812]: I1124 19:17:06.976462 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.975801 4812 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970825 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970843 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970854 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970898 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.970900 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.971144 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.971537 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.971504 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972065 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972277 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972535 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972537 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.972646 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973116 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973199 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973237 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973236 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973524 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973557 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973574 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973750 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973930 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.973971 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974048 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.974076 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.975502 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.976966 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:07.476940841 +0000 UTC m=+21.265893252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.977406 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.978122 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.978138 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.978309 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.978318 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.978832 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.979469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.979612 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.979612 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.979650 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). 
InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.980092 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.980700 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.976506 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981098 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981143 4812 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981179 4812 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981220 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981226 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981253 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981288 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981320 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981395 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981435 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981465 4812 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981494 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981523 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981562 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981591 4812 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981618 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981646 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981675 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node 
\"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981703 4812 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981732 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981760 4812 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981789 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981817 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981844 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981871 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981901 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981917 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.981931 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.982049 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.982087 4812 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.982116 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.982955 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.983323 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.983901 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.984227 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.984654 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.984990 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.986485 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.987818 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.990061 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.990529 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.991492 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.991980 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.992241 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:07.492138702 +0000 UTC m=+21.281091113 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.993482 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.993650 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.993851 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.994174 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.994476 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.994851 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:07.494739269 +0000 UTC m=+21.283691760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.993887 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.995140 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
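The repeated nestedpendingoperations.go entries above ("failed. No retries permitted until ... (durationBeforeRetry 500ms)") reflect per-volume retry gating with a growing backoff window: a failed MountVolume.SetUp is not reattempted until the deadline passes, and consecutive failures lengthen the wait. The following is a minimal, self-contained sketch of that gating pattern, not kubelet's actual implementation; only the 500ms initial delay appears in this log, and the doubling factor and 2-minute cap are assumptions for illustration.

```go
// Sketch of per-operation exponential backoff gating, in the spirit of the
// "No retries permitted until ..." messages above. Illustrative only.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond // matches "durationBeforeRetry 500ms" in the log
	maxDelay     = 2 * time.Minute        // assumed cap for this sketch
)

type backoff struct {
	lastError time.Time
	duration  time.Duration
}

// fail records a failure and returns the earliest permitted retry time.
func (b *backoff) fail(now time.Time) time.Time {
	if b.duration == 0 {
		b.duration = initialDelay
	} else if b.duration < maxDelay {
		b.duration *= 2 // lengthen the window on each consecutive failure
	}
	b.lastError = now
	return now.Add(b.duration)
}

// allowed reports whether a retry is permitted at time now.
func (b *backoff) allowed(now time.Time) bool {
	return now.After(b.lastError.Add(b.duration))
}

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 4; i++ {
		retryAt := b.fail(now)
		fmt.Printf("failure %d: no retries permitted until %s\n", i, retryAt.Format(time.RFC3339Nano))
		now = retryAt.Add(time.Millisecond) // pretend the next attempt happens just after the deadline
	}
	fmt.Println("retry allowed now?", b.allowed(now))
}
```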
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.996630 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.997073 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: I1124 19:17:06.998509 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.999087 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.999118 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.999139 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:06 crc kubenswrapper[4812]: E1124 19:17:06.999257 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:07.499180301 +0000 UTC m=+21.288132712 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.001850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.003016 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
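Every status patch in this excerpt fails the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod is not listening ("dial tcp 127.0.0.1:9743: connect: connection refused"), which is consistent with the network-node-identity pod itself still being in ContainerCreating. A quick connectivity probe, run on the node, confirms whether the listener is back; the address is taken from the log, everything else is an illustrative sketch.

```go
// Probe the webhook endpoint the kubelet keeps failing to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address copied from the failing Post URL in the log entries above.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		// Expect "connection refused" while the webhook pod is down.
		fmt.Println("webhook endpoint unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting TCP connections")
}
```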
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.003606 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.003752 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.009239 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.014752 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.014841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.014952 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.014985 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.015025 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.015191 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.015729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.015791 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.016138 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.016338 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.016517 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.018130 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.018408 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.018854 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.018491 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.018797 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.018963 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.019233 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.019846 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.019859 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.020296 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.020381 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.020416 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.020461 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.021328 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.021694 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.021994 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.022296 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.022037 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.022427 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.022721 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.022776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.022805 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.023029 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.023044 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.023051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.024596 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.025522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.026299 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.026839 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.027755 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.028076 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.028819 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.029410 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.029543 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.029692 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.029853 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.031323 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.032225 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.032974 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.034942 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.039426 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.040940 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.042140 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.043236 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.043419 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.046212 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.046325 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.048754 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.051101 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.052103 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.053446 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.055823 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.057451 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.058365 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.059975 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.060125 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.062595 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.064271 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.065584 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.067016 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.069149 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.070092 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.071847 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.072753 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
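The long run of kubelet_volumes.go entries above shows the kubelet sweeping /var/lib/kubelet/pods/<podUID>/volumes directories left behind by pods that no longer exist. The following is a hedged, read-only sketch for enumerating which per-pod volume directories remain on the node; the base path comes from the log, and the helper itself is hypothetical, not part of kubelet or any OpenShift tooling.

```go
// List per-pod volume directories under the kubelet state dir, the same
// paths the "Cleaned up orphaned pod volumes dir" entries refer to.
// Read-only sketch; run as root on the node.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/lib/kubelet/pods" // path taken from the log entries above
	pods, err := os.ReadDir(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pods {
		if !p.IsDir() {
			continue
		}
		volDir := filepath.Join(base, p.Name(), "volumes")
		if _, err := os.Stat(volDir); err == nil {
			fmt.Println(volDir) // pod UID dirs that still have a volumes/ tree
		}
	}
}
```

Comparing this listing against the pod UIDs the kubelet reports as cleaned up is one way to verify the sweep actually completed.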
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.073617 4812 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.073816 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.074781 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.080054 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.081295 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082249 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082487 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082554 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082633 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082648 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082736 4812 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082752 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082766 4812 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082781 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082794 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082807 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082819 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082831 4812 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082843 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082855 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082867 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082879 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082899 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082916 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\")
on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082925 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082935 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082945 4812 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082954 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082966 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082974 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082982 4812 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.082991 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083000 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083009 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083018 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083027 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083037 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" 
Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083047 4812 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083056 4812 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083066 4812 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083075 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083083 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083091 4812 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083101 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083110 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083121 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083130 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083138 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083146 4812 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083154 4812 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 
19:17:07.083163 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083171 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083182 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083190 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083199 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083207 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083216 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083226 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083235 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083245 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083254 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083263 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083273 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on 
node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083282 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083291 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083301 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083309 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083319 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083332 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083355 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083365 4812 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083375 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083383 4812 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083392 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083406 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083416 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083424 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083434 4812 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083443 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083451 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083460 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083470 4812 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083478 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083487 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083496 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083504 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083512 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083520 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083529 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083537 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083547 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083555 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083565 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083573 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083583 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083592 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083600 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083608 4812 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083616 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083625 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083635 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083645 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083657 4812 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083669 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083678 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083685 4812 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083693 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083702 4812 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083711 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083720 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083729 4812 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083737 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083747 4812 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083755 4812 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083763 4812 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083773 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083781 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083790 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083799 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083808 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083817 4812 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083825 4812 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083833 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083842 4812 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083851 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083860 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083870 4812 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083879 4812 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083889 4812 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.083898 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.085420 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.086194 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.086744 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.087746 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.088809 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.089276 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.089876 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.089933 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.090891 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.092032 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.092651 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.093128 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.093621 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.094116 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.095235 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.095769 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.095764 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e" exitCode=255 Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.096677 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.097146 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.097713 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.098650 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.099114 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.099568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e"} Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.101579 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.111461 4812 scope.go:117] "RemoveContainer" containerID="37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.111578 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.112487 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.124804 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.140483 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.156249 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.170168 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.182178 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.193671 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.206516 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.212786 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.215352 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.227514 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24
T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.228437 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 19:17:07 crc kubenswrapper[4812]: W1124 19:17:07.233781 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-7b120bbf6196db50f9640c8e77ece9582385f911a8e2e4f87b98bd9fef54afcc WatchSource:0}: Error finding container 7b120bbf6196db50f9640c8e77ece9582385f911a8e2e4f87b98bd9fef54afcc: Status 404 returned error can't find the container with id 7b120bbf6196db50f9640c8e77ece9582385f911a8e2e4f87b98bd9fef54afcc Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.239116 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 19:17:07 crc kubenswrapper[4812]: W1124 19:17:07.244007 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-211781776ef319c02f3cc20d4a36127bb873bbd70bc6d5803ac8d621fc4e1aa7 WatchSource:0}: Error finding container 211781776ef319c02f3cc20d4a36127bb873bbd70bc6d5803ac8d621fc4e1aa7: Status 404 returned error can't find the container with id 211781776ef319c02f3cc20d4a36127bb873bbd70bc6d5803ac8d621fc4e1aa7 Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.486047 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.486172 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:17:08.486139332 +0000 UTC m=+22.275091713 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.486237 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.486386 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.486465 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:08.486451811 +0000 UTC m=+22.275404192 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.586979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.587048 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:07 crc kubenswrapper[4812]: I1124 19:17:07.587083 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.587222 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.587298 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:08.587277313 +0000 UTC m=+22.376229724 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.587829 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.587861 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.587879 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.587925 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:08.587910571 +0000 UTC m=+22.376862972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.588001 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.588024 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.588038 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:07 crc kubenswrapper[4812]: E1124 19:17:07.588079 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:08.588066376 +0000 UTC m=+22.377018787 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.100922 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df"} Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.101010 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7b120bbf6196db50f9640c8e77ece9582385f911a8e2e4f87b98bd9fef54afcc"} Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.104682 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.108322 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15"} Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.108671 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.109450 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"9e53d7a543dc9bbd75535429d1b1ee85f1cc447d4209c2639115eaa5634e5517"} Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.110955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93"} Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.110979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a"} Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.110990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"211781776ef319c02f3cc20d4a36127bb873bbd70bc6d5803ac8d621fc4e1aa7"} Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.124326 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.140449 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.153029 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.165858 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.179332 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24
T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.191840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.205547 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.220882 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.239709 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.255152 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.269672 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.282567 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.296698 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.307244 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:08Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.496968 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.497092 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.497202 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.497277 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:10.497257512 +0000 UTC m=+24.286209883 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.497640 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:17:10.497607312 +0000 UTC m=+24.286559683 (durationBeforeRetry 2s). 
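The repeated "failed calling webhook \"pod.network-node-identity.openshift.io\"" errors above are a pure validity-window rejection: the Go TLS client refuses the webhook's serving certificate because the node clock (2025-11-24T19:17:08Z) is past the certificate's NotAfter (2025-08-24T17:21:41Z). A minimal Go sketch of the same inspection against the endpoint named in the log (127.0.0.1:9743); InsecureSkipVerify is set only so the handshake survives long enough to read the dates that normal verification rejects:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint from the log. Verification is skipped
	// only so we can inspect the certificate that normal verification
	// rejects with "certificate has expired or is not yet valid".
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	now := time.Now().UTC()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		expired := now.After(cert.NotAfter) || now.Before(cert.NotBefore)
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
			cert.Subject, cert.NotBefore, cert.NotAfter, expired)
	}
}

Run against the live socket, this should report expired=true for the leaf certificate until the webhook's serving cert is rotated, which is exactly the condition the status_manager patch attempts keep tripping over.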
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.598478 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.598539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.598572 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.598714 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.598781 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:10.598762263 +0000 UTC m=+24.387714634 (durationBeforeRetry 2s). 
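The UnmountVolume.TearDown failure above says kubevirt.io.hostpath-provisioner is absent from the kubelet's list of registered CSI drivers, which is expected immediately after a kubelet restart until the driver pod re-registers over its plugin socket. The kubelet mirrors that in-memory registry into the node's CSINode object, so a client-go sketch can show what is currently registered; the node name "crc" is taken from the log hostname, and the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// CSINode reflects the CSI drivers the kubelet has registered for
	// the node; kubevirt.io.hostpath-provisioner should reappear here
	// once the driver pod re-registers.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}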
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.598773 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.598831 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.598851 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.598943 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:10.598913507 +0000 UTC m=+24.387866068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.599077 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.599103 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.599117 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.599156 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:10.599143243 +0000 UTC m=+24.388095834 (durationBeforeRetry 2s). 
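The "No retries permitted until ... (durationBeforeRetry 2s)" lines are straightforward backoff bookkeeping: the deadline is the failure timestamp plus the 2s durationBeforeRetry, printed with Go's monotonic-clock suffix (m=+24.387866068). Reconstructing the deadline logged just above for the kube-api-access-s2dwl operation as a sanity check:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Failure time taken from the log entry; the kubelet schedules the
	// next attempt exactly durationBeforeRetry (2s) later.
	failedAt := time.Date(2025, 11, 24, 19, 17, 8, 598913507, time.UTC)
	deadline := failedAt.Add(2 * time.Second)
	fmt.Println("no retries permitted until:", deadline)
}

This prints 2025-11-24 19:17:10.598913507 +0000 UTC, matching the deadline in the log.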
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.965319 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.965468 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.965550 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.966058 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.966275 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:08 crc kubenswrapper[4812]: E1124 19:17:08.966582 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
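The "Error syncing pod, skipping" entries above all reduce to NetworkReady=false because, per the log's own message, no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A sketch of the same directory probe; the extension list matches what libcni's config loader accepts (.conf, .conflist, .json), but treat it as illustrative rather than the runtime's exact code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json": // extensions libcni loads
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir)
	}
}

An empty result here is consistent with the sandbox-creation failures: the pods cannot get networking until the network provider writes its config into that directory.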
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.976589 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.977415 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.978457 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 19:17:08 crc kubenswrapper[4812]: I1124 19:17:08.978973 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.067255 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.073580 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.081740 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.092179 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.112935 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.134200 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.154989 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.185648 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.205100 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.225289 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.245549 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.264860 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.283020 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.304431 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.323868 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.343023 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.363145 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.385733 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:10Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.515727 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.515839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.515967 4812 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:17:14.515931931 +0000 UTC m=+28.304884312 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.515973 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.516063 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:14.516042744 +0000 UTC m=+28.304995225 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.616759 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.616877 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.616992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617076 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617117 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 
19:17:10.617143 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617161 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617194 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617215 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617209 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617249 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:14.617215197 +0000 UTC m=+28.406167608 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617292 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:14.617268798 +0000 UTC m=+28.406221209 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.617418 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:14.617380191 +0000 UTC m=+28.406332602 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.965112 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.965162 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.965308 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:10 crc kubenswrapper[4812]: I1124 19:17:10.965360 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.965496 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:10 crc kubenswrapper[4812]: E1124 19:17:10.965685 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.122547 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6"} Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.145292 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.167707 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.186513 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.220574 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.240636 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.256103 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.269522 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:11 crc kubenswrapper[4812]: I1124 19:17:11.288133 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:11Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.009886 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lhgj5"] Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.010531 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zwlsb"] Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.010803 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.011083 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.013713 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.013746 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.013890 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.013914 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.015256 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.015668 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.015713 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.016532 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.033917 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.045859 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.058909 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.071806 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.085632 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.098800 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.112769 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.127132 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.128382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6-hosts-file\") pod \"node-resolver-zwlsb\" (UID: \"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\") " pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.128533 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c270cb89-97c2-48c4-94c3-9b8420d81cfd-cni-binary-copy\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.128645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8n2v\" (UniqueName: \"kubernetes.io/projected/c270cb89-97c2-48c4-94c3-9b8420d81cfd-kube-api-access-m8n2v\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.128758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-system-cni-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.128878 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-hostroot\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129005 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-cni-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129125 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-k8s-cni-cncf-io\") pod 
\"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129248 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-cni-multus\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129381 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-kubelet\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129508 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-multus-certs\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129633 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2brdr\" (UniqueName: \"kubernetes.io/projected/1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6-kube-api-access-2brdr\") pod \"node-resolver-zwlsb\" (UID: \"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\") " pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129766 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-cnibin\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129806 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-os-release\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129831 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-socket-dir-parent\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-netns\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129882 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-cni-bin\") pod \"multus-lhgj5\" (UID: 
\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129939 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-conf-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129963 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-daemon-config\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.129983 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-etc-kubernetes\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.139405 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.150759 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.161978 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.181879 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.198107 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.215432 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.228311 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230691 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-cni-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230741 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-k8s-cni-cncf-io\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-cni-multus\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230789 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-kubelet\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230815 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-multus-certs\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230852 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2brdr\" (UniqueName: \"kubernetes.io/projected/1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6-kube-api-access-2brdr\") pod \"node-resolver-zwlsb\" (UID: \"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\") " pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230876 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-cnibin\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " 
pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230899 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-os-release\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230922 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-socket-dir-parent\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230945 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-netns\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.230970 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-cni-bin\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231006 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-conf-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231031 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-daemon-config\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231032 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-socket-dir-parent\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-etc-kubernetes\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231050 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-cnibin\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231077 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/c270cb89-97c2-48c4-94c3-9b8420d81cfd-cni-binary-copy\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-kubelet\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231106 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-netns\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231110 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-k8s-cni-cncf-io\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231125 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-cni-bin\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231156 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6-hosts-file\") pod \"node-resolver-zwlsb\" (UID: \"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\") " pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231119 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6-hosts-file\") pod \"node-resolver-zwlsb\" (UID: \"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\") " pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231161 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-var-lib-cni-multus\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231183 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-etc-kubernetes\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231224 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-conf-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231253 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-hostroot\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231276 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-hostroot\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231295 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-cni-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231320 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-host-run-multus-certs\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8n2v\" (UniqueName: \"kubernetes.io/projected/c270cb89-97c2-48c4-94c3-9b8420d81cfd-kube-api-access-m8n2v\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-system-cni-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231442 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-os-release\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231461 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c270cb89-97c2-48c4-94c3-9b8420d81cfd-system-cni-dir\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.231958 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c270cb89-97c2-48c4-94c3-9b8420d81cfd-multus-daemon-config\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.232537 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c270cb89-97c2-48c4-94c3-9b8420d81cfd-cni-binary-copy\") pod \"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc 
kubenswrapper[4812]: I1124 19:17:12.256326 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.265558 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8n2v\" (UniqueName: \"kubernetes.io/projected/c270cb89-97c2-48c4-94c3-9b8420d81cfd-kube-api-access-m8n2v\") pod 
\"multus-lhgj5\" (UID: \"c270cb89-97c2-48c4-94c3-9b8420d81cfd\") " pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.273911 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2brdr\" (UniqueName: \"kubernetes.io/projected/1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6-kube-api-access-2brdr\") pod \"node-resolver-zwlsb\" (UID: \"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\") " pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.285639 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.314932 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.324383 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lhgj5" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.331823 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zwlsb" Nov 24 19:17:12 crc kubenswrapper[4812]: W1124 19:17:12.336279 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc270cb89_97c2_48c4_94c3_9b8420d81cfd.slice/crio-ec752dcd61b4eb8a2eff46f8c943135559f4ed9c5c44ffea5921b5d550bad48e WatchSource:0}: Error finding container ec752dcd61b4eb8a2eff46f8c943135559f4ed9c5c44ffea5921b5d550bad48e: Status 404 returned error can't find the container with id ec752dcd61b4eb8a2eff46f8c943135559f4ed9c5c44ffea5921b5d550bad48e Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.338295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: W1124 19:17:12.351357 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e7fb7b6_a04c_4594_a2ed_b81aa6bcced6.slice/crio-5525935a4ea74ee3850365446c2ecaa5c321b1c6396ba4e6490d8d2a9b2c9cc0 WatchSource:0}: Error finding container 5525935a4ea74ee3850365446c2ecaa5c321b1c6396ba4e6490d8d2a9b2c9cc0: Status 404 returned error can't find the container with id 5525935a4ea74ee3850365446c2ecaa5c321b1c6396ba4e6490d8d2a9b2c9cc0 Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.399515 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-nscsk"] Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.400104 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.404466 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.404726 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.404858 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dgm54"] Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.405206 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.406140 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.406536 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.407631 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.407935 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-qj8tt"] Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.408676 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.411196 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.411306 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.411426 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.411608 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.412023 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.412109 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.412297 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.412458 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.412711 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.426694 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.445175 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.468496 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.494152 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.516577 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.529407 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533696 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-os-release\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533779 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-ovn\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533801 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533820 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533865 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-config\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533882 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4zx7\" (UniqueName: \"kubernetes.io/projected/b24bf762-6020-46b4-b9e8-589eb8ed0650-kube-api-access-x4zx7\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533938 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-slash\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533959 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-var-lib-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.533994 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-netd\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534068 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534095 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-env-overrides\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534131 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534153 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-script-lib\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534193 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-system-cni-dir\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-systemd-units\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534253 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-bin\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cnibin\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534355 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-ovn-kubernetes\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534434 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovn-node-metrics-cert\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534501 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cni-binary-copy\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534525 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-node-log\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-etc-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534584 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-log-socket\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534644 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-rootfs\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534697 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-kubelet\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534724 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-systemd\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534749 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmss\" (UniqueName: \"kubernetes.io/projected/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-kube-api-access-qkmss\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8fl9\" (UniqueName: \"kubernetes.io/projected/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-kube-api-access-r8fl9\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534819 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-proxy-tls\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.534857 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-netns\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.541522 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.555322 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.569076 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.580678 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.589628 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.599425 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.616989 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.628511 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635346 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkmss\" (UniqueName: \"kubernetes.io/projected/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-kube-api-access-qkmss\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8fl9\" (UniqueName: 
\"kubernetes.io/projected/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-kube-api-access-r8fl9\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635451 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-proxy-tls\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635478 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-netns\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635502 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-os-release\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635529 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-ovn\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635552 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4zx7\" (UniqueName: \"kubernetes.io/projected/b24bf762-6020-46b4-b9e8-589eb8ed0650-kube-api-access-x4zx7\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635632 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-config\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635660 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635675 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-ovn\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635687 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-slash\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-netns\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-var-lib-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635807 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-netd\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-os-release\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635865 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-env-overrides\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635914 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-script-lib\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635990 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-systemd-units\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636032 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-system-cni-dir\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636061 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-netd\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636071 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cnibin\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636080 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636095 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-ovn-kubernetes\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-bin\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636158 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636176 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovn-node-metrics-cert\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cni-binary-copy\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636214 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-slash\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636121 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-ovn-kubernetes\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636248 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-node-log\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636257 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-bin\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636218 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-node-log\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636312 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-rootfs\") pod 
\"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636371 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-etc-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636397 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-log-socket\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636424 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-kubelet\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636451 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-systemd\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636480 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-config\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-systemd\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636556 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-rootfs\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636587 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-etc-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636623 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-log-socket\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 
24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636626 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-mcd-auth-proxy-config\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636674 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-systemd-units\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636659 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-kubelet\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636833 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cnibin\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.636888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-system-cni-dir\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.635808 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-var-lib-openvswitch\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.637012 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.637135 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-script-lib\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.637410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-env-overrides\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 
19:17:12.639826 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-proxy-tls\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.640981 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovn-node-metrics-cert\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.646400 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-cni-binary-copy\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.653226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8fl9\" (UniqueName: \"kubernetes.io/projected/bcb3ad4b-5afb-47fe-8963-9f79489d45d5-kube-api-access-r8fl9\") pod \"machine-config-daemon-nscsk\" (UID: \"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\") " pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.661217 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.661411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkmss\" (UniqueName: \"kubernetes.io/projected/ed62377b-bc6c-45ab-9c18-a91c1b8fb56d-kube-api-access-qkmss\") pod \"multus-additional-cni-plugins-qj8tt\" (UID: \"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\") " 
pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.664999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4zx7\" (UniqueName: \"kubernetes.io/projected/b24bf762-6020-46b4-b9e8-589eb8ed0650-kube-api-access-x4zx7\") pod \"ovnkube-node-dgm54\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.680539 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.700034 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.713828 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.718084 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:17:12 crc kubenswrapper[4812]: W1124 19:17:12.727380 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcb3ad4b_5afb_47fe_8963_9f79489d45d5.slice/crio-2c0b702b20827973fdb8d1bdfa4cb99204360f72f2005c86d30f72b2c9608715 WatchSource:0}: Error finding container 2c0b702b20827973fdb8d1bdfa4cb99204360f72f2005c86d30f72b2c9608715: Status 404 returned error can't find the container with id 2c0b702b20827973fdb8d1bdfa4cb99204360f72f2005c86d30f72b2c9608715 Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.727419 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.742969 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.754590 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.759718 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.806178 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.822726 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.831285 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.844669 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.849056 4812 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.850711 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.850742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.850751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.850859 4812 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.856106 4812 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.856302 4812 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.857076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.857110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.857119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.857135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.857144 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:12Z","lastTransitionTime":"2025-11-24T19:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:12 crc kubenswrapper[4812]: E1124 19:17:12.873564 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 
2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.877987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.878058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.878070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.878115 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.878129 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:12Z","lastTransitionTime":"2025-11-24T19:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:12 crc kubenswrapper[4812]: E1124 19:17:12.888887 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 
2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.893449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.893485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.893493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.893511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.893520 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:12Z","lastTransitionTime":"2025-11-24T19:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:12 crc kubenswrapper[4812]: E1124 19:17:12.907067 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:12Z is after 
2025-08-24T17:21:41Z" Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.910584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.910619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.910631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.910649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.910661 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:12Z","lastTransitionTime":"2025-11-24T19:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.926445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.926480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.926490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.926504 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.926514 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:12Z","lastTransitionTime":"2025-11-24T19:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:12 crc kubenswrapper[4812]: E1124 19:17:12.941609 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.944287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.944312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.944321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.944345 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.944354 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:12Z","lastTransitionTime":"2025-11-24T19:17:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.965060 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:17:12 crc kubenswrapper[4812]: E1124 19:17:12.965232 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.965747 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:17:12 crc kubenswrapper[4812]: E1124 19:17:12.965846 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:17:12 crc kubenswrapper[4812]: I1124 19:17:12.965948 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:17:12 crc kubenswrapper[4812]: E1124 19:17:12.966000 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.046471 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.046505 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.046519 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.046535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.046547 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.133231 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312" exitCode=0 Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.133318 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.133379 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"4c9662b6a6eccb4a86d933bc4d0f93a8ff37764244d1ea70eb4bd823cebd2a82"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.134788 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zwlsb" event={"ID":"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6","Type":"ContainerStarted","Data":"de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.134822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zwlsb" event={"ID":"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6","Type":"ContainerStarted","Data":"5525935a4ea74ee3850365446c2ecaa5c321b1c6396ba4e6490d8d2a9b2c9cc0"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.136616 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerStarted","Data":"297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.136682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerStarted","Data":"5bf96eb2be33c4cd1c7e2da0ed66cbde4a6051dac841649a851ad7bb34f57e10"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.139116 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.139192 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.139209 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"2c0b702b20827973fdb8d1bdfa4cb99204360f72f2005c86d30f72b2c9608715"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.140582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerStarted","Data":"c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.140762 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerStarted","Data":"ec752dcd61b4eb8a2eff46f8c943135559f4ed9c5c44ffea5921b5d550bad48e"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.150223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.150281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.150296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.150323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.150362 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.153378 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.174928 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.190859 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.207123 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.237926 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.250754 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.252942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.253008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.253026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.253053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.253071 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.262737 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.277806 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.291741 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.304113 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.320748 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.340125 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.352196 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.355247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.355301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.355313 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.355349 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.355364 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.367200 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.381539 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.398799 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.411919 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.426713 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.440686 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.454599 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.458300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.458379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.458396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.458420 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.458439 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.474006 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.488295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.508769 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.523532 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.542475 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.555600 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.562374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.562409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.562419 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.562434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.562443 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.664197 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.664240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.664250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.664273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.664283 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.766777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.766822 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.766835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.766853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.766866 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.870981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.871348 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.871357 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.871371 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.871381 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.973537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.973585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.973596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.973614 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:13 crc kubenswrapper[4812]: I1124 19:17:13.973626 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:13Z","lastTransitionTime":"2025-11-24T19:17:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.076110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.076145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.076154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.076168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.076179 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.155175 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.155236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.155251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.155265 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.155278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.155293 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.157045 4812 generic.go:334] "Generic (PLEG): container finished" podID="ed62377b-bc6c-45ab-9c18-a91c1b8fb56d" containerID="297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2" exitCode=0 Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.157170 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerDied","Data":"297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.171025 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.179225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.179288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.179308 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.179359 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.179379 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.184166 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.199616 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.211566 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.224815 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.236781 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.257206 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.273329 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\"
,\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.282099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.282153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.282171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.282194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.282212 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.286722 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.321467 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.339494 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc 
kubenswrapper[4812]: I1124 19:17:14.355507 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.371908 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.385031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.385059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.385068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.385083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.385093 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.488368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.488416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.488430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.488450 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.488468 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.552128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.552379 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:17:22.552307983 +0000 UTC m=+36.341260374 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.552458 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.552619 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.552708 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:22.552689664 +0000 UTC m=+36.341642035 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.591741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.591780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.591793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.591813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.591825 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.653621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.653709 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.653739 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.653838 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.653933 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.653965 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 
19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.653966 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.654033 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.654059 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.653981 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.653945 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:22.653919608 +0000 UTC m=+36.442871979 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.654254 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:22.654192595 +0000 UTC m=+36.443144966 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.654280 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:22.654273637 +0000 UTC m=+36.443226258 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.695440 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.695485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.695496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.695513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.695526 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.798686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.798739 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.798781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.798805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.798819 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.902195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.902236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.902246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.902262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.902274 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:14Z","lastTransitionTime":"2025-11-24T19:17:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.965209 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.965217 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.965452 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.965526 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:14 crc kubenswrapper[4812]: I1124 19:17:14.965223 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:14 crc kubenswrapper[4812]: E1124 19:17:14.965640 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.005500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.005544 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.005564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.005585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.005600 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.061790 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-v9d5f"] Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.062312 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.064580 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.065226 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.065385 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.066509 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.085743 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.108056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.108110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.108127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.108150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.108166 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.117418 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.152868 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.159629 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw6wv\" (UniqueName: \"kubernetes.io/projected/28fff088-73f1-4f9c-b240-e3bb4b704b07-kube-api-access-mw6wv\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.159686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28fff088-73f1-4f9c-b240-e3bb4b704b07-serviceca\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.159749 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28fff088-73f1-4f9c-b240-e3bb4b704b07-host\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.161160 4812 generic.go:334] "Generic (PLEG): container finished" podID="ed62377b-bc6c-45ab-9c18-a91c1b8fb56d" containerID="159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3" exitCode=0 Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.161194 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerDied","Data":"159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.169623 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.179846 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.189374 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.200926 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.211304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.211353 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.211362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.211376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.211389 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.214117 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.227398 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.240533 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.251641 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.260654 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28fff088-73f1-4f9c-b240-e3bb4b704b07-host\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.260749 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw6wv\" (UniqueName: \"kubernetes.io/projected/28fff088-73f1-4f9c-b240-e3bb4b704b07-kube-api-access-mw6wv\") pod 
\"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.260818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28fff088-73f1-4f9c-b240-e3bb4b704b07-serviceca\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.260816 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28fff088-73f1-4f9c-b240-e3bb4b704b07-host\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.264049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28fff088-73f1-4f9c-b240-e3bb4b704b07-serviceca\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.269866 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z 
is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.279304 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.288952 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw6wv\" (UniqueName: \"kubernetes.io/projected/28fff088-73f1-4f9c-b240-e3bb4b704b07-kube-api-access-mw6wv\") pod \"node-ca-v9d5f\" (UID: \"28fff088-73f1-4f9c-b240-e3bb4b704b07\") " pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.295364 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.309627 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.313853 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.313897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.313912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.313928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.313938 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.323101 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.334317 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.345257 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.358207 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c
04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.370699 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.377362 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-v9d5f" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.381821 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: W1124 19:17:15.389524 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28fff088_73f1_4f9c_b240_e3bb4b704b07.slice/crio-1571ffaa5fb6b01126ef43832c867b09637372949439d154b474ccbb627ec71f WatchSource:0}: Error finding container 1571ffaa5fb6b01126ef43832c867b09637372949439d154b474ccbb627ec71f: Status 404 returned error can't find the container with id 1571ffaa5fb6b01126ef43832c867b09637372949439d154b474ccbb627ec71f Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.394945 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.417654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.417689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.417698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.417712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.417721 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.421590 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.435671 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.449418 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.459927 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.472885 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.489196 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:15Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.521383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.521417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.521430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.521447 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.521457 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.624368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.624424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.624442 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.624467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.624484 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.727902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.727947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.727963 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.727985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.728003 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.831884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.831930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.831945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.831965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.831980 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.934897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.934934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.934944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.934960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:15 crc kubenswrapper[4812]: I1124 19:17:15.934971 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:15Z","lastTransitionTime":"2025-11-24T19:17:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.037030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.037086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.037106 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.037131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.037149 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.141080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.141380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.141390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.141405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.141418 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.170890 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.175423 4812 generic.go:334] "Generic (PLEG): container finished" podID="ed62377b-bc6c-45ab-9c18-a91c1b8fb56d" containerID="0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749" exitCode=0 Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.175489 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerDied","Data":"0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.178995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v9d5f" event={"ID":"28fff088-73f1-4f9c-b240-e3bb4b704b07","Type":"ContainerStarted","Data":"30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.179050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v9d5f" event={"ID":"28fff088-73f1-4f9c-b240-e3bb4b704b07","Type":"ContainerStarted","Data":"1571ffaa5fb6b01126ef43832c867b09637372949439d154b474ccbb627ec71f"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.195532 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.216059 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.230715 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.244872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.244918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.244936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.244960 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.244975 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.253663 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z 
is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.266254 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.281961 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.294649 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.321219 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.338179 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.348967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.349032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.349054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.349080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.349099 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.351867 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.365285 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.378017 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.390596 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.405251 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.424297 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.444133 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.452049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.452079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.452088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.452102 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.452112 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.461198 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.476104 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.490510 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.505100 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.521815 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: 
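Every failed status patch in this stretch of the journal dies the same way: the kubelet's Post to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected during the TLS handshake because the serving certificate's validity window (ending 2025-08-24T17:21:41Z) predates the node clock (2025-11-24T19:17:16Z). A minimal Go probe along the following lines, written for illustration and not part of the captured log, reads the same certificate fields the x509 error reports; the address and timestamps are taken from the entries above.

// Illustrative sketch: reads the serving certificate of the webhook
// endpoint named in the entries above and compares its validity window
// against the local clock, mirroring the x509 check that fails in every
// status-patch attempt.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook address from the kubelet entries above
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		// Skip verification: the point is to inspect the certificate,
		// which an expired-cert handshake would otherwise abort.
		InsecureSkipVerify: true,
	})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Printf("expired:   %t\n", now.After(cert.NotAfter))
}

With the node clock three months past notAfter, expired prints true, matching the handshake failure in every webhook call recorded here; on CRC this condition typically clears once the cluster's certificate rotation runs after a long offline period.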
I1124 19:17:16.539249 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.554945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.554993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.555005 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.555024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.555041 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.558129 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.573435 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.585768 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.601827 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.612318 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.624367 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.659567 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.659621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.659638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.659662 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.659679 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.762922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.762960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.762969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.762985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.762995 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.866628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.866709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.866736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.866770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.866794 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.965291 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.965404 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:16 crc kubenswrapper[4812]: E1124 19:17:16.965504 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:16 crc kubenswrapper[4812]: E1124 19:17:16.965587 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.965783 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:16 crc kubenswrapper[4812]: E1124 19:17:16.966058 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.970445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.970473 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.970483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.970498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.970509 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:16Z","lastTransitionTime":"2025-11-24T19:17:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.982285 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347202
43b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:16 crc kubenswrapper[4812]: I1124 19:17:16.999568 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.013107 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.025180 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.041550 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.056665 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.072044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.072075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.072085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.072100 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.072137 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.077026 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.092671 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.105721 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.132935 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.145458 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.164223 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.175548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.175608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.175623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.175641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.175652 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.179157 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.187835 4812 generic.go:334] "Generic (PLEG): container finished" podID="ed62377b-bc6c-45ab-9c18-a91c1b8fb56d" containerID="87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6" exitCode=0 Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.187873 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerDied","Data":"87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.200623 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.216776 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.228046 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.238657 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.251422 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.261634 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.277164 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.278617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.278639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.278647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.278659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.278668 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.288150 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.297029 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.324264 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.336825 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.349676 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.363983 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.379279 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.380510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.380535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.380543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.380557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.380566 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.394411 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.483786 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.483862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.483888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.483913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.483930 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.587161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.587213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.587232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.587258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.587276 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.690535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.690609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.690639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.690671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.690693 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.793812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.793878 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.793895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.793920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.793938 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.897814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.898290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.898309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.898365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:17 crc kubenswrapper[4812]: I1124 19:17:17.898384 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:17Z","lastTransitionTime":"2025-11-24T19:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.001603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.001641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.001652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.001669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.001683 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.105607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.105672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.105683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.105701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.105713 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.195642 4812 generic.go:334] "Generic (PLEG): container finished" podID="ed62377b-bc6c-45ab-9c18-a91c1b8fb56d" containerID="4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419" exitCode=0 Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.195722 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerDied","Data":"4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.208171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.208200 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.208211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.208227 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.208238 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.212614 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.225102 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.235751 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.248384 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.260432 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.272182 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.286244 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: 
I1124 19:17:18.299821 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.311136 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.311209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.311221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.311271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.311285 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.312300 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.325897 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.335180 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.351445 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.362539 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.375695 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:18Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.414720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.414757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.414766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.414780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.414791 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.517424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.517742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.517751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.517765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.517778 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.621151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.621202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.621216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.621238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.621254 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.724537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.724624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.724642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.724672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.724690 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.828282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.828388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.828413 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.828448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.828472 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.932140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.932198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.932217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.932242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.932258 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:18Z","lastTransitionTime":"2025-11-24T19:17:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.965476 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.965590 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:18 crc kubenswrapper[4812]: E1124 19:17:18.965614 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:18 crc kubenswrapper[4812]: I1124 19:17:18.965719 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:18 crc kubenswrapper[4812]: E1124 19:17:18.965869 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:18 crc kubenswrapper[4812]: E1124 19:17:18.966032 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.035287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.035432 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.035459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.035492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.035513 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.142589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.142796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.142905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.142936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.142979 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.203053 4812 generic.go:334] "Generic (PLEG): container finished" podID="ed62377b-bc6c-45ab-9c18-a91c1b8fb56d" containerID="b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103" exitCode=0 Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.203154 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerDied","Data":"b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.210650 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.211493 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.211591 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.211618 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.249803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.249845 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.249857 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.249876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.249890 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.254272 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.257443 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.257506 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.275840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.296908 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.311065 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.323552 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.344914 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.352523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.352555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.352566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.352582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.352594 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.359262 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.372740 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.389260 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.401197 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.424001 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\
"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.441748 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.455008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.455051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.455066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.455085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.455100 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.462267 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.476149 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.490322 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.506399 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.519162 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.538359 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.557569 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.558730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.558789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.558809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.558833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.558851 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.576125 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.590270 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.610289 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.657805 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.663629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.663678 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.663692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.663724 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.663738 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.673386 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.693053 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.709547 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.722497 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.734028 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:19Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.765951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.765981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.765989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.766002 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.766011 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.868629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.868860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.868924 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.869023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.869094 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.971737 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.971807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.971826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.972226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:19 crc kubenswrapper[4812]: I1124 19:17:19.972258 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:19Z","lastTransitionTime":"2025-11-24T19:17:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.075996 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.076047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.076064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.076093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.076111 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.180242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.180296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.180310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.180352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.180368 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.220283 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" event={"ID":"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d","Type":"ContainerStarted","Data":"2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.238848 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.263178 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555fd097894eb90e0adb78ba7ed7069cb5574156
7505625d2016385da82ef670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.278634 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.282729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.282798 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.282815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.282839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.282855 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.300559 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.321823 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.347010 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.366116 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\"
,\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.384125 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.385303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.385362 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc 
kubenswrapper[4812]: I1124 19:17:20.385374 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.385391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.385403 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.402698 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed 
loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.422304 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.444160 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.465858 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c
04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.479698 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.487561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.487603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.487615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.487633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.487644 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.490628 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.590562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.590603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.590615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.590633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.590645 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.694501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.694541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.694551 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.694566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.694576 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.788871 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.797948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.798023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.798049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.798079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.798105 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.814137 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.833676 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
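
The records above and below this point all carry their rejected status patch inline, escaped twice: once when the kubelet quotes the patch inside its error string, and once more when the err="..." value is quoted for the journal. Two unicode_escape passes recover plain JSON. A minimal sketch, assuming one complete record has been pasted into record.txt (the filename is illustrative):

    import codecs
    import json
    import re

    # One full journal record, e.g. the kube-apiserver-crc failure above.
    line = open("record.txt").read()

    # The patch sits between: failed to patch status \"{ ... }\" for pod
    m = re.search(r'failed to patch status \\"(.*)\\" for pod', line)

    once = codecs.decode(m.group(1), "unicode_escape")           # undo journal quoting
    patch = json.loads(codecs.decode(once, "unicode_escape"))    # undo kubelet quoting
    print(json.dumps(patch, indent=2)[:400])

The iptables-alerter record interrupted here resumes with its escaped payload below.
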
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.851431 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.869807 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.885665 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.901590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.901654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.901668 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.901686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.901699 4812 setters.go:603] "Node became not ready" 
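
Note what the decoded kube-apiserver-crc patch above contains: the kube-apiserver-check-endpoints containerStatus embeds the previous container's dying log in lastState.terminated.message. That container came up, served on [::]:17697, then exited 255 on the fatal `pods "kube-apiserver-crc" not found` and was restarted once (restartCount 1). Building on the same decode idiom (record.txt again being illustrative), the embedded log can be pulled out by container name rather than by position:

    import codecs
    import json
    import re

    line = open("record.txt").read()   # the kube-apiserver-crc record above
    m = re.search(r'failed to patch status \\"(.*)\\" for pod', line)
    once = codecs.decode(m.group(1), "unicode_escape")
    patch = json.loads(codecs.decode(once, "unicode_escape"))

    cs = next(s for s in patch["status"]["containerStatuses"]
              if s["name"] == "kube-apiserver-check-endpoints")
    term = cs["lastState"]["terminated"]
    print(term["exitCode"], term["reason"])     # 255 Error
    print(term["message"].splitlines()[-1])     # the fatal "not found" line

The "Node became not ready" record interrupted here resumes with its node= and condition= fields below.
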
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:20Z","lastTransitionTime":"2025-11-24T19:17:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.908199 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.927186 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.943047 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.965687 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.965687 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:20 crc kubenswrapper[4812]: E1124 19:17:20.965854 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:20 crc kubenswrapper[4812]: E1124 19:17:20.965900 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.965687 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:20 crc kubenswrapper[4812]: E1124 19:17:20.965993 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
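
Interleaved with the webhook failures, the kubelet keeps declaring the node NotReady for a separate reason: NetworkReady=false because no CNI configuration file exists yet under /etc/kubernetes/cni/net.d/. On this node that file comes from OVN-Kubernetes, and the ovnkube-node-dgm54 record below still shows ovnkube-controller as the one unready container, so this condition should clear on its own once it settles. A small poller, assuming it runs on the node itself (the interval and retry count are arbitrary):

    import pathlib
    import time

    # Kubelet's network-readiness check fails until a CNI config file
    # appears here (path taken from the error text in these records).
    CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")

    for _ in range(60):                 # poll for up to five minutes
        confs = sorted(CNI_DIR.glob("*"))
        if confs:
            print("CNI config present:", [p.name for p in confs])
            break
        time.sleep(5)
    else:
        print("still no CNI config in", CNI_DIR)

The "Error syncing pod, skipping" record for network-check-target-xd92c interrupted here resumes below.
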
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.969111 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.980135 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status 
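
Every patch rejection in this capture bottoms out in the same TLS failure: the API server cannot call the webhook pod.network-node-identity.openshift.io at https://127.0.0.1:9743 because its serving certificate expired at 2025-08-24T17:21:41Z while the node clock reads 2025-11-24T19:17:20Z. A sketch for reading that certificate's validity window directly from the endpoint, assuming Python runs on the node and the third-party cryptography package is installed:

    import socket
    import ssl

    from cryptography import x509   # third-party, assumed installed

    # Fetch the webhook's serving certificate. Verification is disabled
    # on purpose: the expired certificate is what we want to inspect.
    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)   # expect 2025-08-24 17:21:41

For this log, notAfter should print as the 2025-08-24T17:21:41 from the error text. The node-ca-v9d5f record interrupted here resumes with its escaped payload below.
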
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:20 crc kubenswrapper[4812]: I1124 19:17:20.995025 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:20Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.004563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.005966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.005979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.005998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.006011 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.008942 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.027120 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
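
The record interrupted here is the most telling one: the pod whose status cannot be patched, network-node-identity-vrzqb, appears to be the very pod serving the failing webhook (its webhook container mounts /etc/webhook-cert/, and the webhook is named pod.network-node-identity.openshift.io). The timestamps alone say how stale that certificate is:

    from datetime import datetime, timezone

    # Both values are taken verbatim from the x509 error text above.
    not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)
    now = datetime(2025, 11, 24, 19, 17, 21, tzinfo=timezone.utc)
    print(now - not_after)   # 92 days, 1:55:40 past expiry

Roughly three months past expiry, consistent with a machine resumed after a long suspension. Until that serving certificate is rotated, presumably by regenerating the webhook-cert secret and restarting the pod, every status patch will keep bouncing off the same x509 error. The record's escaped payload follows below.
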
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.043513 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.108704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.108753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc 
kubenswrapper[4812]: I1124 19:17:21.108769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.108793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.108810 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.211218 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.211278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.211297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.211327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.211473 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.224916 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/0.log" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.228556 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670" exitCode=1 Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.228620 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.229432 4812 scope.go:117] "RemoveContainer" containerID="555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.244986 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.261976 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.280506 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.299408 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.312434 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.314164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.314193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.314202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.314234 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.314245 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.340555 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://555fd097894eb90e0adb78ba7ed7069cb5574156
7505625d2016385da82ef670\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:21Z\\\",\\\"message\\\":\\\"1 6095 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:17:21.076297 6095 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:17:21.075850 6095 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:17:21.076798 6095 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 19:17:21.075897 6095 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:17:21.077276 6095 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 19:17:21.077365 6095 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 19:17:21.077374 6095 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 19:17:21.077399 6095 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 19:17:21.077399 6095 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 19:17:21.077422 6095 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 19:17:21.077527 6095 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.356742 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.373032 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5e
d81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"sta
rtTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.385719 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.400107 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.415414 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.417266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.417371 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.417385 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.417403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.417416 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.430565 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.449569 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.463452 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:21Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.521285 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.521323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.521363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.521382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.521396 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.624556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.624668 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.624688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.624720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.624740 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.727088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.727146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.727161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.727185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.727197 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.829625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.829667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.829677 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.829694 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.829707 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.932431 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.932455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.932463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.932475 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:21 crc kubenswrapper[4812]: I1124 19:17:21.932485 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:21Z","lastTransitionTime":"2025-11-24T19:17:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.034983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.035044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.035062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.035087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.035102 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.137416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.137483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.137498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.137515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.137577 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.236543 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/1.log" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.237877 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/0.log" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.240151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.240199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.240217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.240241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.240258 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.243724 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce" exitCode=1 Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.243786 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.243863 4812 scope.go:117] "RemoveContainer" containerID="555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.245254 4812 scope.go:117] "RemoveContainer" containerID="c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce" Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.245644 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.270575 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"rest
artCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.296834 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@
sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting 
controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.316543 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.336278 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.344134 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.344212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.344240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.344272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.344294 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.361505 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.377696 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.393215 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.408372 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.434582 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://555fd097894eb90e0adb78ba7ed7069cb55741567505625d2016385da82ef670\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:21Z\\\",\\\"message\\\":\\\"1 6095 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:17:21.076297 6095 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:17:21.075850 6095 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:17:21.076798 6095 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1124 19:17:21.075897 6095 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:17:21.077276 6095 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1124 19:17:21.077365 6095 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 19:17:21.077374 6095 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 19:17:21.077399 6095 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 19:17:21.077399 6095 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 19:17:21.077422 6095 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 19:17:21.077527 6095 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.447740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.447809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.447827 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.447851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.447867 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.450174 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.470840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.488254 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.503529 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.519186 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:22Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.552596 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.552671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.552696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.552730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.552751 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.639633 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.639838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.639929 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:17:38.63988082 +0000 UTC m=+52.428833231 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.640008 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.640087 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:38.640061375 +0000 UTC m=+52.429013786 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.656327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.656412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.656431 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.656457 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.656476 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.779401 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.779464 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.779487 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779599 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779615 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779607 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779718 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:38.77969298 +0000 UTC m=+52.568645381 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779625 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779823 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779823 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:38.779795313 +0000 UTC m=+52.568747694 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779823 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779880 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779904 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.779996 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:17:38.779963378 +0000 UTC m=+52.568915779 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.782029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.782063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.782078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.782099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
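The NodeNotReady entries that dominate the rest of this section all reduce to one readiness gate: the container runtime keeps reporting NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, which on this cluster happens only once the ovnkube-node pod (seen crash-looping below) writes one. A rough Go rendering of that check follows; the accepted extensions and the bare directory scan are simplifying assumptions, since real runtimes load configs through CNI's libcni.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether any plausible CNI config file exists.
func networkReady(confDir string) bool {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false // an unreadable or missing directory counts as no config (assumption)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed accepted extensions
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/kubernetes/cni/net.d/"
	if !networkReady(dir) {
		// Mirrors the condition message repeated throughout this log.
		fmt.Printf("NetworkReady=false reason:NetworkPluginNotReady message:no CNI configuration file in %s. Has your network provider started?\n", dir)
	}
}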
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.782110 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.884931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.885010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.885030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.885056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.885082 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.965725 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.965752 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.965934 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.966106 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.966395 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:17:22 crc kubenswrapper[4812]: E1124 19:17:22.966641 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.987767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.987823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.987840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.987860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:22 crc kubenswrapper[4812]: I1124 19:17:22.987873 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:22Z","lastTransitionTime":"2025-11-24T19:17:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.091247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.091307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.091330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.091403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.091424 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.194842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.194900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.194925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.194953 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.194975 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.250474 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/1.log" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.257727 4812 scope.go:117] "RemoveContainer" containerID="c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce" Nov 24 19:17:23 crc kubenswrapper[4812]: E1124 19:17:23.258083 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.280674 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.298493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.298564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.298588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.298617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.298639 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.300675 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.321659 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.335542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.335608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.335630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.335660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.335682 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
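All of these status-patch failures share one root cause: the network-node-identity webhook at 127.0.0.1:9743 is presenting a serving certificate whose validity window closed on 2025-08-24T17:21:41Z, so Go's TLS verification rejects every call made three months later. The check reduces to a wall-clock comparison against the certificate's NotBefore/NotAfter window, sketched below as a stand-in for the verification crypto/x509 performs; the current time and expiry are taken from the log, while NotBefore is invented for the example.

package main

import (
	"fmt"
	"time"
)

// checkValidity mimics the validity-window test that fails in the log.
func checkValidity(now, notBefore, notAfter time.Time) error {
	if now.Before(notBefore) || now.After(notAfter) {
		return fmt.Errorf("x509: certificate has expired or is not yet valid: current time %s is after %s",
			now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	now, _ := time.Parse(time.RFC3339, "2025-11-24T19:17:23Z")
	notBefore, _ := time.Parse(time.RFC3339, "2024-08-24T17:21:41Z") // assumed; only the expiry appears in the log
	notAfter, _ := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
	fmt.Println(checkValidity(now, notBefore, notAfter))
}

Because the webhook intercepts node and pod status patches, the stale certificate blocks every update until the certificate is rotated or the node's clock-skewed certificates are regenerated.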
Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.345772 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:23 crc kubenswrapper[4812]: E1124 19:17:23.358286 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has no disk
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c
55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.362726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.362783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.362805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.362830 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.362848 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.366935 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: E1124 19:17:23.377105 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.383991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.385566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.385752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.385829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.385850 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.388323 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.402273 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: E1124 19:17:23.404897 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.409519 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.409557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.409569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.409587 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.409600 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.417598 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: E1124 19:17:23.422908 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.426060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.426110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.426124 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.426142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.426155 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.430464 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: E1124 19:17:23.439113 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c
55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: E1124 19:17:23.439482 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.441622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.441714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.441740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.441773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.441797 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.446947 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.465808 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.478321 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.510607 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.538052 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:23Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.545316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.545376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.545388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.545405 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.545417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.648468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.648543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.648562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.648588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.648606 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.751792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.751852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.751870 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.751897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.751915 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.854692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.854771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.854790 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.854817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.854834 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.958205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.958280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.958298 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.958323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.958381 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:23Z","lastTransitionTime":"2025-11-24T19:17:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.061773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.061828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.061846 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.061868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.061884 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.164938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.165022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.165046 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.165077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.165100 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.267591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.267645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.267664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.267686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.267704 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.370554 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.370630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.370658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.370690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.370714 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.473931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.474003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.474026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.474055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.474078 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.577292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.577418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.577436 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.577461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.577479 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.680577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.680670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.680689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.680713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.680734 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.783654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.783721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.783740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.783766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.783784 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
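While the CNI configuration is missing, the kubelet re-records the same five node events roughly every 100 ms, which is what produces the long runs of near-identical entries here. A small tallying sketch, assuming the journal text has been saved to a file (the file name is hypothetical):

    import re
    from collections import Counter

    # Matches the klog header directly in front of each record, e.g.:
    # Nov 24 19:17:23 crc kubenswrapper[4812]: I1124 19:17:23.441797 4812 setters.go:603] "Node became not ready"
    HEARTBEAT = re.compile(
        r'(\w{3} \d+ \d{2}:\d{2}:\d{2}) crc kubenswrapper\[\d+\]: '
        r'I\d{4} \d{2}:\d{2}:\d{2}\.\d+ \d+ setters\.go:\d+\] "Node became not ready"'
    )

    text = open("kubelet-journal.log").read().replace("\n", "")
    per_second = Counter(HEARTBEAT.findall(text))
    for second, count in sorted(per_second.items()):
        print(second, count)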
Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.886769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.886826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.886845 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.886868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.886884 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.965214 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.965297 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.965331 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:24 crc kubenswrapper[4812]: E1124 19:17:24.965495 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:24 crc kubenswrapper[4812]: E1124 19:17:24.965616 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:24 crc kubenswrapper[4812]: E1124 19:17:24.965787 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.990105 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.990175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.990193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.990221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:24 crc kubenswrapper[4812]: I1124 19:17:24.990241 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:24Z","lastTransitionTime":"2025-11-24T19:17:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.093966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.094054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.094074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.094099 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.094118 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.197395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.197451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.197469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.197496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.197514 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.299838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.299903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.299922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.299948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.299968 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.366644 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4"] Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.367520 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.370950 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.371416 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.395606 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserv
er-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.402487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.402535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.402546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.402567 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.402578 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.414382 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.432321 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.414382 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.432321 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.450279 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.466857 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.486621 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.505009 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.505890 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6k5m\" (UniqueName: \"kubernetes.io/projected/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-kube-api-access-s6k5m\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.506064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.506117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.506136 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.506037 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.506161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.506180 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
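Both "Node became not ready" entries carry the same message: the runtime finds no CNI network config. A rough Go sketch of that readiness test follows; the directory comes from the log message, the accepted extensions follow the usual libcni convention, and the real runtime additionally validates file contents, which this sketch skips.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory named in the NetworkPluginNotReady message above.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        for _, e := range entries {
            if e.IsDir() {
                continue
            }
            // libcni conventionally loads *.conf, *.conflist and *.json.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("NetworkReady=true, config:", e.Name())
                return
            }
        }
        fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
    }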
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.540748 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.571131 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.588259 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
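The ovnkube-node-dgm54 entry above shows ovnkube-controller already in CrashLoopBackOff after its first failed restart ("back-off 10s restarting failed container"). A hedged sketch of the delay sequence follows; the initial 10s, the doubling, and the 5m ceiling are the upstream kubelet defaults, assumed here rather than read from this node's configuration.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // CrashLoopBackOff schedule: start at 10s, double per failed
        // restart, clamp at 5m (upstream kubelet defaults, assumed).
        delay := 10 * time.Second
        const maxDelay = 5 * time.Minute
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: back-off %s\n", restart, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }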
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.607682 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.607781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.607903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.607945 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6k5m\" (UniqueName: \"kubernetes.io/projected/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-kube-api-access-s6k5m\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609124 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609217 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609212 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.609648 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.616157 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.630700 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.636066 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6k5m\" (UniqueName: \"kubernetes.io/projected/23a6d3e0-f315-42dd-bfc0-1dcf90de3a56-kube-api-access-s6k5m\") pod \"ovnkube-control-plane-749d76644c-sxfp4\" (UID: \"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.654886 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996c
de69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{
\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.675133 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.689811 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" Nov 24 19:17:25 crc kubenswrapper[4812]: W1124 19:17:25.709815 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23a6d3e0_f315_42dd_bfc0_1dcf90de3a56.slice/crio-c558a2046393a8e22f731fda2523bc32b26ed776193f4ad80df75b53d9fe353a WatchSource:0}: Error finding container c558a2046393a8e22f731fda2523bc32b26ed776193f4ad80df75b53d9fe353a: Status 404 returned error can't find the container with id c558a2046393a8e22f731fda2523bc32b26ed776193f4ad80df75b53d9fe353a Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.712794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.712863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.712882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.712909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.712927 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.817665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.817761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.817778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.817805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.817822 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.920850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.920895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.920907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.920929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:25 crc kubenswrapper[4812]: I1124 19:17:25.920941 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:25Z","lastTransitionTime":"2025-11-24T19:17:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.023079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.023121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.023132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.023150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.023163 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:26Z","lastTransitionTime":"2025-11-24T19:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.126523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.127179 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.127196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.127219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.127235 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:26Z","lastTransitionTime":"2025-11-24T19:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.230090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.230145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.230157 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.230177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.230189 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:26Z","lastTransitionTime":"2025-11-24T19:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.269038 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" event={"ID":"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56","Type":"ContainerStarted","Data":"909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.269129 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" event={"ID":"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56","Type":"ContainerStarted","Data":"2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.269145 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" event={"ID":"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56","Type":"ContainerStarted","Data":"c558a2046393a8e22f731fda2523bc32b26ed776193f4ad80df75b53d9fe353a"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.286488 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.301100 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.315694 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.325772 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.332739 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.332772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.332784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.332801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.332812 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:26Z","lastTransitionTime":"2025-11-24T19:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.338466 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.351532 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.367354 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.380420 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c
04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.393868 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.410993 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cn
cf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.420872 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.435597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.435637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.435648 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.435665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.435677 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:26Z","lastTransitionTime":"2025-11-24T19:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.439190 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.449280 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.464434 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.474983 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.498473 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jxmnc"] Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.499215 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:26 crc kubenswrapper[4812]: E1124 19:17:26.499322 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.515496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.515640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9mn\" (UniqueName: \"kubernetes.io/projected/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-kube-api-access-xb9mn\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.519977 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.538936 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.539205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.539258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.539277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.539303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.539322 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:26Z","lastTransitionTime":"2025-11-24T19:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.557795 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.576611 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.593282 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.607294 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.616436 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.616544 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9mn\" (UniqueName: \"kubernetes.io/projected/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-kube-api-access-xb9mn\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:26 crc kubenswrapper[4812]: E1124 19:17:26.616715 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:26 crc kubenswrapper[4812]: E1124 19:17:26.616830 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:17:27.116799302 +0000 UTC m=+40.905751713 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.632203 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.635103 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9mn\" (UniqueName: \"kubernetes.io/projected/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-kube-api-access-xb9mn\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.643240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.643318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.643394 4812 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.643427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.643446 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:26Z","lastTransitionTime":"2025-11-24T19:17:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.649550 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.662565 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.677802 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.705453 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28
e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.730328 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.746109 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.761067 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.779930 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.795173 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.964780 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.964786 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:26 crc kubenswrapper[4812]: E1124 19:17:26.965460 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:26 crc kubenswrapper[4812]: E1124 19:17:26.965504 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.964981 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:26 crc kubenswrapper[4812]: E1124 19:17:26.966005 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:26 crc kubenswrapper[4812]: I1124 19:17:26.987641 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 
envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.003913 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.009696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.009765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.009785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.009810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.009830 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.022111 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.035958 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.048911 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.064884 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.087135 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: 
I1124 19:17:27.106599 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.111582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.111612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.111624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.111641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.111656 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.121608 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:27 crc kubenswrapper[4812]: E1124 19:17:27.121823 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:27 crc kubenswrapper[4812]: E1124 19:17:27.121932 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:17:28.121905072 +0000 UTC m=+41.910857483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.126813 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.150663 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.168454 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.214478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.214662 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.214693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.214751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.214780 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.223422 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67c
e912577f5fd760b52b80f8ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.244809 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.267132 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T1
9:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.280309 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.291439 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.317513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.317546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.317555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.317569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.317581 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.420538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.420597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.420614 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.420637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.420656 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.523895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.523959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.523977 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.524003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.524021 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.627213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.627285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.627311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.627375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.627405 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.729823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.729912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.729940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.729973 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.729996 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.834091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.834161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.834180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.834208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.834233 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.937631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.938075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.938278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.938514 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.938708 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:27Z","lastTransitionTime":"2025-11-24T19:17:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:27 crc kubenswrapper[4812]: I1124 19:17:27.964810 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:17:27 crc kubenswrapper[4812]: E1124 19:17:27.965269 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.041383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.041467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.041489 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.041523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.041579 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.132428 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:17:28 crc kubenswrapper[4812]: E1124 19:17:28.132692 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 19:17:28 crc kubenswrapper[4812]: E1124 19:17:28.132767 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:17:30.132743269 +0000 UTC m=+43.921695680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.145222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.145301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.145327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.145395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.145421 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.248393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.248449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.248468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.248494 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.248512 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.352047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.352126 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.352144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.352172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.352190 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.455075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.455124 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.455141 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.455163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.455184 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.557663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.557733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.557761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.557820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.557848 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.660562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.660689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.660720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.660743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.660760 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.764784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.764863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.764887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.764917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.764941 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.868580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.868659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.868700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.868736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.868760 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.965034 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.965118 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.965116 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:17:28 crc kubenswrapper[4812]: E1124 19:17:28.965280 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 19:17:28 crc kubenswrapper[4812]: E1124 19:17:28.965430 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 19:17:28 crc kubenswrapper[4812]: E1124 19:17:28.965574 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.972434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.972497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.972521 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.972556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:28 crc kubenswrapper[4812]: I1124 19:17:28.972581 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:28Z","lastTransitionTime":"2025-11-24T19:17:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.075692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.075759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.075778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.075804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.075821 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.179382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.179449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.179471 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.179503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.179527 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.281648 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.281925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.282068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.282212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.282331 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.384771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.384834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.384854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.384878 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.384895 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.487506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.487571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.487592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.487617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.487636 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.590747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.590804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.590817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.590840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.590854 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.693872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.693932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.693948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.693972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.693990 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.797671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.797724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.797741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.797767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.797785 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.901299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.901404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.901464 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.901491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.901509 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:29Z","lastTransitionTime":"2025-11-24T19:17:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:29 crc kubenswrapper[4812]: I1124 19:17:29.965239 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:29 crc kubenswrapper[4812]: E1124 19:17:29.965558 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.004779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.004837 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.004854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.004878 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.004897 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.107722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.107783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.107807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.107835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.107855 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.150641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:30 crc kubenswrapper[4812]: E1124 19:17:30.150932 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:30 crc kubenswrapper[4812]: E1124 19:17:30.151060 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:17:34.151029127 +0000 UTC m=+47.939981538 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.211686 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.211798 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.211812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.211829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.211843 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.315612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.315673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.315731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.315757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.315841 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.419608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.419672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.419693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.419722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.419741 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.522955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.523017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.523035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.523059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.523076 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.626326 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.626397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.626408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.626424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.626435 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.729724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.729784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.729802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.729826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.729844 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.832780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.832871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.832905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.832949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.832972 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.936480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.936644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.936671 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.936702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.936724 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:30Z","lastTransitionTime":"2025-11-24T19:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.965253 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:30 crc kubenswrapper[4812]: E1124 19:17:30.965389 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.965443 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:30 crc kubenswrapper[4812]: I1124 19:17:30.965249 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:30 crc kubenswrapper[4812]: E1124 19:17:30.965632 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:30 crc kubenswrapper[4812]: E1124 19:17:30.965679 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.040456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.040555 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.040574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.040602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.040626 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.144066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.144132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.144149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.144194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.144213 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.247823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.247885 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.247898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.247921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.247932 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.351673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.351746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.351770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.351801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.351823 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.454892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.454958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.454976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.455003 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.455023 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.565210 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.565272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.565291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.565316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.565366 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.668026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.668098 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.668116 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.668141 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.668160 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.770485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.770559 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.770576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.770600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.770618 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.878657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.878929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.879025 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.879130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.879223 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.965448 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:31 crc kubenswrapper[4812]: E1124 19:17:31.965624 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.982240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.982312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.982323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.982358 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:31 crc kubenswrapper[4812]: I1124 19:17:31.982369 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:31Z","lastTransitionTime":"2025-11-24T19:17:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.085796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.085839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.085850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.085866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.085877 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.188051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.188113 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.188130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.188155 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.188176 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.289950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.290005 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.290022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.290044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.290064 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.393505 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.393585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.393608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.393638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.393660 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.496580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.496621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.496632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.496649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.496662 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.598922 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.598965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.598977 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.598994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.599005 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.701998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.702059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.702076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.702108 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.702126 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.804755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.805321 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.805368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.805390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.805403 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.908857 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.908927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.908949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.908983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.909003 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:32Z","lastTransitionTime":"2025-11-24T19:17:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.964958 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.964992 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:32 crc kubenswrapper[4812]: I1124 19:17:32.965026 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:32 crc kubenswrapper[4812]: E1124 19:17:32.965112 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:32 crc kubenswrapper[4812]: E1124 19:17:32.965555 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:32 crc kubenswrapper[4812]: E1124 19:17:32.965649 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.012269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.012367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.012392 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.012421 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.012443 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.115746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.115825 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.115843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.115867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.115885 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.218822 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.218899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.218920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.218946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.218965 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.322469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.322529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.322547 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.322571 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.322589 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.425832 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.425895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.425915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.425942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.425959 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.529533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.529599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.529621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.529650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.529675 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.559407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.559461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.559477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.559499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.559515 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: E1124 19:17:33.579654 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:33Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.590574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.590638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.590657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.590681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.590698 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: E1124 19:17:33.612465 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:33Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.618174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.618271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.618291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.618316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.618376 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: E1124 19:17:33.640134 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:33Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.645542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.645590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.645607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.645628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.645644 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: E1124 19:17:33.666204 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:33Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.670841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.670894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
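[editorial aside, not part of the journal] The retry loop above has two distinct failures: the node stays NotReady because no CNI configuration exists in /etc/kubernetes/cni/net.d/, and, independently, every node-status patch is rejected because the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z while the host clock reads 2025-11-24T19:17:33Z. Below is a minimal sketch, in Go, of how one might confirm the certificate expiry directly from the node; the endpoint address is taken from the log, everything else (file name, output format) is illustrative only and not part of the original log.

// checkcert.go - hypothetical helper: connect to the webhook endpoint seen in
// the log (assumed reachable as 127.0.0.1:9743 from the node itself) and print
// the validity window of the certificate it serves.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Skip chain verification deliberately: the goal is to inspect the
	// expired certificate, not to fail the handshake on it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook endpoint: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		// Matches the "x509: certificate has expired" error in the log.
		fmt.Println("certificate is EXPIRED")
	}
}

On a CRC cluster this pattern commonly appears when the VM is started long after its certificates were minted; the usual remedies are to let the cluster's certificate rotation complete after startup or to recreate the instance (crc delete, then crc start). The CNI half of the message typically clears once the network operator publishes its configuration into /etc/kubernetes/cni/net.d/. The final occurrence of the retry entry follows unchanged.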
event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.670911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.670933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.670968 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: E1124 19:17:33.690978 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:33Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:33 crc kubenswrapper[4812]: E1124 19:17:33.691218 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.693695 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.693756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.693773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.693799 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.693817 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.797277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.797394 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.797414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.797439 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.797457 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.901305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.901403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.901421 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.901478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.901498 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:33Z","lastTransitionTime":"2025-11-24T19:17:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.965232 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:33 crc kubenswrapper[4812]: E1124 19:17:33.965511 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:33 crc kubenswrapper[4812]: I1124 19:17:33.966610 4812 scope.go:117] "RemoveContainer" containerID="c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.004717 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.004779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.004797 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.004824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.004845 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.107595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.107649 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.107669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.107694 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.107711 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.188291 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:34 crc kubenswrapper[4812]: E1124 19:17:34.188598 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:34 crc kubenswrapper[4812]: E1124 19:17:34.188745 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:17:42.188709105 +0000 UTC m=+55.977661556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.211130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.211191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.211204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.211226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.211244 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.310736 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/1.log" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.313716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.313756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.313771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.313794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.313807 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.315894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.316593 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.338785 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.359443 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.380757 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.405503 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.416741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.416802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.416822 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.416848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.416866 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.432960 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.449522 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.471329 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.489156 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.509929 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.518954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.518998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.519010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.519028 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.519039 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.529926 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.541810 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.554345 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.564014 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.576758 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.587312 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.600386 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.624322 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.624385 4812 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.624398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.624415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.624428 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.727063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.727121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.727138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.727164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.727181 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.829984 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.830032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.830044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.830063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.830075 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.932680 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.932749 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.932760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.932779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.932791 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:34Z","lastTransitionTime":"2025-11-24T19:17:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.965043 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.965149 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:34 crc kubenswrapper[4812]: E1124 19:17:34.965212 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:34 crc kubenswrapper[4812]: I1124 19:17:34.965266 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:34 crc kubenswrapper[4812]: E1124 19:17:34.965427 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:34 crc kubenswrapper[4812]: E1124 19:17:34.965598 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
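[Editor's note] The NodeNotReady and "Error syncing pod" entries all trace to one condition: the container runtime reports NetworkReady=false because no CNI config file exists yet in /etc/kubernetes/cni/net.d/ (on this node it is typically written by the OVN-Kubernetes node pod once its controller stays up, which the CrashLoopBackOff below is preventing). An illustrative stand-alone check of that directory; the accepted extensions are an assumption based on common CNI conventions, not taken from this log.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds at least one CNI config file.
// The extension set (.conf, .conflist, .json) is assumed, not from the log.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d/")
	if err != nil || !ok {
		// Mirrors the readiness message repeated throughout the entries above.
		fmt.Println("network not ready: no CNI configuration file in /etc/kubernetes/cni/net.d/")
		return
	}
	fmt.Println("CNI configuration present")
}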
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.035911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.036022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.036044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.036073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.036093 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.138994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.139086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.139110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.139144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.139169 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.241985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.242063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.242086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.242130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.242156 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.323564 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/2.log" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.325079 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/1.log" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.329468 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77" exitCode=1 Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.329524 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.329638 4812 scope.go:117] "RemoveContainer" containerID="c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.330949 4812 scope.go:117] "RemoveContainer" containerID="ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77" Nov 24 19:17:35 crc kubenswrapper[4812]: E1124 19:17:35.331365 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.345983 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.346055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.346079 4812 kubelet_node_status.go:724] "Recording event message for node" 
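[Editor's note] The "back-off 20s restarting failed container=ovnkube-controller" entry above matches kubelet's exponential restart backoff: by the commonly documented defaults it starts at 10s and doubles per restart up to a 5m cap, which puts the second restart at 20s, consistent with restartCount 2 for ovnkube-controller earlier in the log. A small sketch of that schedule; the constants are the usual kubelet defaults, assumed rather than read from this log.

package main

import (
	"fmt"
	"time"
)

// restartBackoff returns the delay before the given restart attempt, using the
// commonly cited kubelet defaults (10s initial, doubling, 5m cap).
func restartBackoff(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 1; r <= 6; r++ {
		fmt.Printf("restart %d -> back-off %s\n", r, restartBackoff(r))
	}
	// restart 2 -> back-off 20s, matching the CrashLoopBackOff message above.
}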
node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.346109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.346131 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.355995 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.377586 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.398739 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.419865 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.437612 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.449639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.449704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.449723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.449750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.449769 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.454729 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.478646 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.494943 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.527130 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c43d55c738518a107b5cec5b010d57690e78b67ce912577f5fd760b52b80f8ce\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:22Z\\\",\\\"message\\\":\\\"46 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1124 19:17:22.045496 6246 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1124 19:17:22.045709 6246 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1124 19:17:22.045752 6246 default_network_controller.go:776] Recording success event on pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1124 19:17:22.045620 6246 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1124 19:17:22.045766 6246 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}\\\\nF1124 19:17:22.045787 6246 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:21Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, 
err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.545721 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.552434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.552517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.552547 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.552583 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.552611 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.572926 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.586315 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.602157 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.618265 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.638088 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.652197 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:35Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.655591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.655639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.655653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.655679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.655692 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.759860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.759919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.759932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.759951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.759965 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.863379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.863848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.863936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.864047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.864142 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.965308 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:35 crc kubenswrapper[4812]: E1124 19:17:35.965593 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.966989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.967042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.967060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.967085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:35 crc kubenswrapper[4812]: I1124 19:17:35.967103 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:35Z","lastTransitionTime":"2025-11-24T19:17:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.070545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.070604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.070622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.070646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.070664 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.174459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.174538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.174564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.174598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.174624 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.279087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.279171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.279185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.279210 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.279224 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.335690 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/2.log" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.340859 4812 scope.go:117] "RemoveContainer" containerID="ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77" Nov 24 19:17:36 crc kubenswrapper[4812]: E1124 19:17:36.341154 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.364487 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87c
ca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.382366 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.383021 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.383076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.383088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.383109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.383124 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.400888 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.425773 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.444990 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.465295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.484459 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.486637 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.486710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.486732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.486760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.486784 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.503901 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.524900 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.545115 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.566676 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.582460 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.590198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.590377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.590468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.590561 4812 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.590642 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.613798 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37
939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.632160 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.656309 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.676470 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.694364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.694424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.694442 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.694469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.694489 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.797659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.797716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.797732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.797757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.797774 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.900682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.900780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.900810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.900840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.900858 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:36Z","lastTransitionTime":"2025-11-24T19:17:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.965101 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.965157 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:17:36 crc kubenswrapper[4812]: E1124 19:17:36.965293 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.965472 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:17:36 crc kubenswrapper[4812]: E1124 19:17:36.965527 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 19:17:36 crc kubenswrapper[4812]: E1124 19:17:36.965681 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 19:17:36 crc kubenswrapper[4812]: I1124 19:17:36.992520 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:36Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.003785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.003840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.003860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.003887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.003906 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.011780 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.028992 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc 
kubenswrapper[4812]: I1124 19:17:37.053092 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.071954 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.099066 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.107478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.107538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.107557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.107588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.107607 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.116870 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.134060 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.154643 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.175849 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.192063 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.210813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.210875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.210894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.210919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.210936 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.226081 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37
939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.241037 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.264385 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.290328 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.312954 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:37Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.313186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.313217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.313234 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.313260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.313281 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.416564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.416642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.416659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.416689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.416707 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.520788 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.520862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.520880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.520913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.520932 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.624521 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.624580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.624598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.624622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.624642 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.728097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.728173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.728195 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.728222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.728240 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.831824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.831906 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.831925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.831951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.831968 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.935239 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.935302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.935319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.935378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.935396 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:37Z","lastTransitionTime":"2025-11-24T19:17:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:37 crc kubenswrapper[4812]: I1124 19:17:37.965539 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:37 crc kubenswrapper[4812]: E1124 19:17:37.965787 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.039032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.039109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.039127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.039153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.039171 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.142110 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.142175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.142192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.142225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.142245 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.245556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.245633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.245656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.245688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.245709 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.348387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.348461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.348485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.348511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.348530 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.451681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.451751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.451769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.451805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.451823 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.554954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.555015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.555031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.555056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.555073 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.641513 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.641766 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:18:10.641715559 +0000 UTC m=+84.430668000 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.641900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.642040 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.642118 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:18:10.64210036 +0000 UTC m=+84.431052761 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.658641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.658707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.658727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.658752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.658773 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.761865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.761930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.761947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.761978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.761998 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.844927 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.845045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.845108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845207 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845321 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:18:10.845292267 +0000 UTC m=+84.634244678 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845375 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845382 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845416 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845444 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845450 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845468 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845577 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:18:10.845543974 +0000 UTC m=+84.634496395 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.845620 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:18:10.845602576 +0000 UTC m=+84.634555067 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.865653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.865750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.865780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.865812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.865835 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.965211 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.965315 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.965366 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.965514 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.965640 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:38 crc kubenswrapper[4812]: E1124 19:17:38.965789 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.968548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.968599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.968617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.968645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:38 crc kubenswrapper[4812]: I1124 19:17:38.968663 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:38Z","lastTransitionTime":"2025-11-24T19:17:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.071702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.071764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.071781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.071806 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.071825 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.175459 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.175522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.175538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.175565 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.175606 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.279463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.279573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.279599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.279630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.279653 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.382412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.382494 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.382517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.382545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.382563 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.485265 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.485363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.485417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.485444 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.485461 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.589181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.589261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.589278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.589305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.589323 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.692220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.692279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.692296 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.692320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.692367 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.795689 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.795747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.795762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.795790 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.795807 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.898813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.898884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.898901 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.898929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.898948 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:39Z","lastTransitionTime":"2025-11-24T19:17:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:39 crc kubenswrapper[4812]: I1124 19:17:39.965000 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:39 crc kubenswrapper[4812]: E1124 19:17:39.965212 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.002120 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.002182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.002199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.002222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.002241 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.047903 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.064199 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.072985 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.093706 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.104222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.104289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.104312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.104377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.104404 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.115235 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.135416 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.154460 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.172529 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.196789 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37
939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.207475 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.207538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.207561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.207592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.207614 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.214540 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.234879 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.255159 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.275858 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.296227 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.310469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.310540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.310558 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.310580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.310598 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.317736 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.341318 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.360321 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.377870 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:40Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.413586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.413643 4812 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.413660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.413684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.413702 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.517239 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.517311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.517329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.517386 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.517405 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.619995 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.620043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.620060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.620081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.620098 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.723174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.723238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.723261 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.723307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.723330 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.827568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.827696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.827818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.827856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.827878 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.931744 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.931854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.931953 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.931978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.931995 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:40Z","lastTransitionTime":"2025-11-24T19:17:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.964837 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:40 crc kubenswrapper[4812]: E1124 19:17:40.964998 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.965600 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:40 crc kubenswrapper[4812]: I1124 19:17:40.965774 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:40 crc kubenswrapper[4812]: E1124 19:17:40.965860 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:40 crc kubenswrapper[4812]: E1124 19:17:40.965974 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.034816 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.034888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.034905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.034933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.034952 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.138480 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.138543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.138563 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.138588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.138606 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.241305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.241403 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.241423 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.241448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.241466 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.345031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.345520 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.345860 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.346066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.346264 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.449787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.449831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.449899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.449931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.450001 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.553289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.553455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.553482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.553506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.553523 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.656585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.656675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.656698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.656729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.656749 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.760168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.760800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.760835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.760867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.760889 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.864803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.864863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.864883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.864909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.864930 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.965580 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:41 crc kubenswrapper[4812]: E1124 19:17:41.965813 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.968623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.968714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.968740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.968778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:41 crc kubenswrapper[4812]: I1124 19:17:41.968803 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:41Z","lastTransitionTime":"2025-11-24T19:17:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.072059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.072101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.072112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.072129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.072141 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.175657 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.175732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.175753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.175779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.175798 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.279142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.279203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.279219 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.279244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.279262 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.284700 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:42 crc kubenswrapper[4812]: E1124 19:17:42.284966 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:42 crc kubenswrapper[4812]: E1124 19:17:42.285069 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:17:58.285035337 +0000 UTC m=+72.073987748 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.381939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.381998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.382023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.382050 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.382067 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.485660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.485729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.485751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.485777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.485796 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.589487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.589548 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.589565 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.589588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.589606 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.693454 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.693525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.693542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.693567 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.693586 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.820294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.820393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.820419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.820451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.820474 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.923535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.923584 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.923602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.923626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.923645 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:42Z","lastTransitionTime":"2025-11-24T19:17:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.965315 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.965478 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:42 crc kubenswrapper[4812]: E1124 19:17:42.965528 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:42 crc kubenswrapper[4812]: E1124 19:17:42.965667 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:42 crc kubenswrapper[4812]: I1124 19:17:42.965746 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:42 crc kubenswrapper[4812]: E1124 19:17:42.965833 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.027010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.027069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.027088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.027112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.027132 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.131314 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.131729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.131747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.131773 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.131792 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.234980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.235039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.235057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.235083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.235101 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.337883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.338171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.338316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.338491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.338611 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.442129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.442474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.442650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.442821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.442964 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.546267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.546316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.546328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.546373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.546384 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.650030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.650073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.650085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.650105 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.650117 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.755053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.755427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.755454 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.755484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.755504 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.811039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.811111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.811134 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.811163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.811184 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: E1124 19:17:43.831148 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:43Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.836946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.837007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.837029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.837055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.837074 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: E1124 19:17:43.857987 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:43Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.864109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.864168 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
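The retried "Error updating node status" records above all fail for the same reason: the kubelet's PATCH of the Node object is intercepted by the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that endpoint serves a certificate that expired on 2025-08-24T17:21:41Z, three months before the node's current clock of 2025-11-24T19:17:43Z. A minimal Go sketch for confirming this from the node itself; the program is illustrative only, and assumes nothing beyond the loopback address and port taken from the log:

package main

// Inspect the certificate served by the webhook endpoint named in the log
// and compare its validity window against the current time, mirroring the
// x509 "certificate has expired or is not yet valid" check that fails above.

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint from the failed webhook POST above

	// InsecureSkipVerify is deliberate: the goal is to fetch and inspect a
	// certificate that would fail verification, not to trust it.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now().UTC()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s\n", cert.Subject.String(),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		if now.After(cert.NotAfter) {
			fmt.Printf("expired: current time %s is after %s\n",
				now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		}
	}
}

On a CRC instance this pattern usually means the VM was resumed long after its bundled certificates lapsed; the cluster normally rotates its internal certificates itself once the control plane can come up, so the error either clears after rotation or the instance has to be recreated.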
event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.864205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.864230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.864244 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: E1124 19:17:43.886201 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:43Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.892379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.892440 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
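Every "Node became not ready" record in between carries the identical Ready=False condition: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration. The network plugin (OVN-Kubernetes here, judging by the network-node-identity webhook) only writes that file once its own pods run, which they cannot while the webhook certificate above is expired, so the node loops in NotReady. A minimal sketch of the directory probe, assuming the standard CNI config extensions; this is an illustration, not the kubelet's actual loader:

package main

// Report whether the CNI conf directory named in the log contains any
// network configuration, which is what NetworkReady=false complains about.

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory from the kubelet message

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}

	found := false
	for _, e := range entries {
		// libcni accepts .conf, .conflist and .json files; anything else
		// in the directory is ignored by the loader.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", confDir)
	}
}

Once the network plugin writes its config there, NetworkReady flips to true and the "Error syncing pod" records for network-check-target, network-check-source and networking-console-plugin should stop.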
event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.892461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.892493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.892516 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: E1124 19:17:43.912069 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:43Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.917325 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.917425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.917443 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.917469 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.917488 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: E1124 19:17:43.939249 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:43Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:43 crc kubenswrapper[4812]: E1124 19:17:43.939506 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.941724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.941801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.941821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.941848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.941867 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:43Z","lastTransitionTime":"2025-11-24T19:17:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:43 crc kubenswrapper[4812]: I1124 19:17:43.965706 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:43 crc kubenswrapper[4812]: E1124 19:17:43.965919 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.045331 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.045444 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.045463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.045491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.045514 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.150209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.150276 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.150294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.150320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.150366 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.253960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.254035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.254058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.254092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.254113 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.356466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.356503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.356511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.356527 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.356538 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.459166 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.459204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.459214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.459244 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.459258 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.562841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.562911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.562930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.562959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.562985 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.666734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.667109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.667327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.667600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.667743 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.770500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.770812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.771001 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.771144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.771261 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.874522 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.874588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.874611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.874641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.874664 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.965081 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:44 crc kubenswrapper[4812]: E1124 19:17:44.965258 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.965397 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.965525 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:44 crc kubenswrapper[4812]: E1124 19:17:44.965712 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:44 crc kubenswrapper[4812]: E1124 19:17:44.965869 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.977834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.977912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.977937 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.977966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:44 crc kubenswrapper[4812]: I1124 19:17:44.977989 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:44Z","lastTransitionTime":"2025-11-24T19:17:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.080758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.080821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.080838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.080863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.080882 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.184240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.184319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.184379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.184412 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.184435 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.287987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.288048 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.288076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.288120 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.288143 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.390580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.390645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.390664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.390688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.390707 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.493784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.493854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.493872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.493898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.493915 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.597173 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.597238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.597260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.597289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.597313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.699918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.700007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.700033 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.700062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.700111 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.802566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.802605 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.802616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.802629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.802638 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.905969 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.906029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.906047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.906070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.906085 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:45Z","lastTransitionTime":"2025-11-24T19:17:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:45 crc kubenswrapper[4812]: I1124 19:17:45.965131 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:45 crc kubenswrapper[4812]: E1124 19:17:45.965364 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.008779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.008842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.008858 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.008883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.008901 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.112199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.112249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.112266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.112289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.112307 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.215775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.215861 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.215894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.215927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.215952 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.319164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.319267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.319284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.319303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.319314 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.422430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.422497 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.422517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.422543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.422566 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.525635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.525713 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.525738 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.525770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.525794 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.629593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.629669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.629693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.629732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.629761 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.733010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.733065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.733078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.733097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.733109 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.837186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.837256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.837276 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.837302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.837320 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.941125 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.941174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.941186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.941204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.941217 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:46Z","lastTransitionTime":"2025-11-24T19:17:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.964918 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.964936 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:46 crc kubenswrapper[4812]: E1124 19:17:46.965129 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.964956 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:46 crc kubenswrapper[4812]: E1124 19:17:46.965218 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:46 crc kubenswrapper[4812]: E1124 19:17:46.965294 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:46 crc kubenswrapper[4812]: I1124 19:17:46.990049 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for 
caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:46Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.009789 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.028598 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.043925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.044163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.044448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.044620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.044757 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.046556 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.064261 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.085209 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.107218 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.128241 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.144279 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.147787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.147831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.147848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.147872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.147888 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.178861 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37
939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.202252 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.217698 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.239886 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.250712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.250760 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.250777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.250802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.250819 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.262477 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.286595 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.304760 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.350013 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:47Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.353525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.353578 4812 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.353604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.353635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.353659 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.456531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.456625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.456647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.456678 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.456703 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.560468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.560558 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.560592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.560623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.560648 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.663410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.663456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.663473 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.663496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.663514 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.766675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.766738 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.766755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.766779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.766796 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.870128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.870201 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.870220 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.870252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.870274 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.965709 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:47 crc kubenswrapper[4812]: E1124 19:17:47.965971 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.968507 4812 scope.go:117] "RemoveContainer" containerID="ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77" Nov 24 19:17:47 crc kubenswrapper[4812]: E1124 19:17:47.968827 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.973095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.973127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.973144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.973163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:47 crc kubenswrapper[4812]: I1124 19:17:47.973181 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:47Z","lastTransitionTime":"2025-11-24T19:17:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.077498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.077564 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.077582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.077609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.077627 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.181135 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.181204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.181223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.181248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.181268 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.284561 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.284621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.284643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.284670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.284689 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.387221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.387317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.387379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.387418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.387443 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.490610 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.490679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.490699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.490725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.490745 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.594608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.594673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.594690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.594714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.594732 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.697769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.697923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.697951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.697980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.698001 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.801788 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.801948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.801972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.802005 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.802030 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.905371 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.905430 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.905449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.905475 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.905493 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:48Z","lastTransitionTime":"2025-11-24T19:17:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.966166 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.966181 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:48 crc kubenswrapper[4812]: I1124 19:17:48.966588 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:48 crc kubenswrapper[4812]: E1124 19:17:48.967483 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:48 crc kubenswrapper[4812]: E1124 19:17:48.966582 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:48 crc kubenswrapper[4812]: E1124 19:17:48.970192 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.008779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.008840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.008858 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.008880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.008897 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.112030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.112093 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.112111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.112134 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.112152 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.215075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.215172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.215191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.215215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.215233 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.318396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.318451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.318468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.318492 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.318510 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.422024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.422085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.422104 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.422128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.422146 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.525639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.525701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.525718 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.525743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.525762 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.628409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.628460 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.628477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.628501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.628518 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.731121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.731188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.731207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.731242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.731261 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.834720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.834794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.834817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.834847 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.834865 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.937979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.938056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.938092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.938122 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.938157 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:49Z","lastTransitionTime":"2025-11-24T19:17:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:49 crc kubenswrapper[4812]: I1124 19:17:49.965564 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:49 crc kubenswrapper[4812]: E1124 19:17:49.965783 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.040685 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.040778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.040804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.040841 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.040864 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.143623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.143704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.143717 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.143734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.143745 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.247566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.247613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.247627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.247645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.247657 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.356609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.356665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.356682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.356705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.356725 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.459450 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.459498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.459509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.459530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.459543 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.562566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.562624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.562638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.562660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.562675 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.665424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.665491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.665515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.665581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.665600 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.768880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.768945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.768964 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.768989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.769007 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.871691 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.871728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.871741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.871756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.871767 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.964603 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.964623 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:50 crc kubenswrapper[4812]: E1124 19:17:50.964791 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.964627 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:50 crc kubenswrapper[4812]: E1124 19:17:50.964855 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:50 crc kubenswrapper[4812]: E1124 19:17:50.964948 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.974382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.974445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.974458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.974474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:50 crc kubenswrapper[4812]: I1124 19:17:50.974484 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:50Z","lastTransitionTime":"2025-11-24T19:17:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.076556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.076599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.076611 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.076634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.076648 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.179791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.179885 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.179908 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.179935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.179953 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.283129 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.283534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.283606 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.283743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.283812 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.386216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.386278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.386301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.386327 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.386386 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.489910 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.489970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.489987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.490011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.490027 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.592511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.592847 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.593029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.593209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.593447 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.696111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.696532 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.696734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.696932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.697128 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.800070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.800120 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.800132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.800149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.800161 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.902706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.902751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.902761 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.902777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.902786 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:51Z","lastTransitionTime":"2025-11-24T19:17:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:51 crc kubenswrapper[4812]: I1124 19:17:51.964574 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:51 crc kubenswrapper[4812]: E1124 19:17:51.964754 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
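The block above is the kubelet's node-status loop: roughly every 100 ms it re-records the four node conditions, keeps Ready=False with reason KubeletNotReady, and pods such as openshift-multus/network-metrics-daemon-jxmnc cannot get a sandbox, all for the root cause the message itself states: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal sketch of the corresponding on-node check follows; the directory is taken from the log, while the .conf/.conflist/.json suffixes are an assumption of this sketch (the conventional CNI config extensions), so treat it as illustrative rather than the kubelet's actual probe.

    #!/usr/bin/env python3
    # Sketch: report whether the directory named in the kubelet error holds
    # any CNI network configuration. The path is copied from the log; the
    # accepted suffixes (.conf, .conflist, .json) are an assumption here.
    import os
    import sys

    CNI_DIR = "/etc/kubernetes/cni/net.d"

    def main() -> int:
        try:
            entries = sorted(os.listdir(CNI_DIR))
        except FileNotFoundError:
            print(f"{CNI_DIR} missing: no network provider has written a config yet")
            return 1
        configs = [e for e in entries if e.endswith((".conf", ".conflist", ".json"))]
        if not configs:
            print(f"{CNI_DIR} exists but holds no CNI config (NetworkPluginNotReady)")
            return 1
        print("CNI config present:", ", ".join(configs))
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Until a network provider (for example the Multus components seen in the openshift-multus namespace above) writes its configuration into that directory, the loop below keeps repeating.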
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.004970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.005016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.005026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.005044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.005055 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.107470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.107510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.107519 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.107534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.107543 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.210729 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.210996 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.211056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.211133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.211192 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.313939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.313980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.313988 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.314004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.314013 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.416726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.417008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.417106 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.417196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.417291 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.520780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.521020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.521080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.521146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.521204 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.623981 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.624035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.624053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.624078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.624096 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.726826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.726859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.726868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.726882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.726891 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.829128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.829185 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.829196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.829213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.829223 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.931728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.931765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.931776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.931794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.931805 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:52Z","lastTransitionTime":"2025-11-24T19:17:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.965423 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:52 crc kubenswrapper[4812]: E1124 19:17:52.965815 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.965565 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:52 crc kubenswrapper[4812]: I1124 19:17:52.965430 4812 util.go:30] "No sandbox for pod can be found. 
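Because the same five-message block repeats for every ~100 ms status update, the raw journal is mostly repetition; when triaging, it helps to collapse it into counts per message and per pod. A small sketch, assuming the log has been saved to a plain-text file (for example via journalctl -u kubelet > kubelet.log; the file name is illustrative) and matching only substrings that appear verbatim in the entries above:

    #!/usr/bin/env python3
    # Sketch: condense a kubelet journal dump like this one into counts of
    # "Node became not ready" records and per-pod sync failures. The matched
    # substrings are copied verbatim from the log entries above.
    import re
    import sys
    from collections import Counter

    POD_RE = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

    def summarize(path: str) -> None:
        not_ready = 0
        sync_errors = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                not_ready += line.count('"Node became not ready"')
                for m in POD_RE.finditer(line):
                    sync_errors[m.group(1)] += 1
        print(f'"Node became not ready" recorded {not_ready} times')
        for pod, n in sync_errors.most_common():
            print(f"{n:4d} sync failures for {pod}")

    if __name__ == "__main__":
        summarize(sys.argv[1])

Run against this capture, it would show the NotReady record repeating about ten times per second while the diagnostics pods (network-check-target, network-check-source, networking-console-plugin) and network-metrics-daemon accumulate sync failures.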
Nov 24 19:17:52 crc kubenswrapper[4812]: E1124 19:17:52.965907 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:17:52 crc kubenswrapper[4812]: E1124 19:17:52.966086 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.034470 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.034535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.034554 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.034579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.034599 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.137602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.137643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.137656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.137674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.137684 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.238866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.238889 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.238899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.238912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.238923 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.341821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.341865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.341878 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.341893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.341904 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.444837 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.444898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.444915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.444940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.444963 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.547203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.547266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.547284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.547309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.547327 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.651540 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.651606 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.651623 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.651646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.651664 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.754706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.754782 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.754808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.754834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.754856 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.858478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.858526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.858535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.858554 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.858564 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.961586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.961625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.961636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.961656 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.961666 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:53Z","lastTransitionTime":"2025-11-24T19:17:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:53 crc kubenswrapper[4812]: I1124 19:17:53.964854 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:53 crc kubenswrapper[4812]: E1124 19:17:53.964969 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.062997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.063027 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.063036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.063048 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.063058 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.075296 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:54Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.079698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.079730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.079739 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.079752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.079760 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.091655 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to the first failed attempt above; elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:54Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.095169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.095223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.095266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.095292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.095310 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.110686 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to the first failed attempt above; elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:54Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.115004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.115074 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.115086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.115102 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.115141 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.133905 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to the first failed attempt above; elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:54Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.137560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.137612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.137629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.137651 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.137667 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.150907 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list identical to the first failed attempt above; elided ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:54Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.151022 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.153811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.153867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.153881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.153900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.153914 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.256095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.256175 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.256194 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.256221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.256240 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.358716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.358778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.358824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.358851 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.358869 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.461566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.461627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.461658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.461683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.461702 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.564247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.564291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.564301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.564316 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.564327 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.667606 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.667719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.667741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.667771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.667794 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.774556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.774737 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.775167 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.775207 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.775231 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.878309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.878378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.878391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.878407 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.878419 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.964923 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.964923 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.965044 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.965092 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.964926 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:54 crc kubenswrapper[4812]: E1124 19:17:54.965170 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.980920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.980946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.980958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.980974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:54 crc kubenswrapper[4812]: I1124 19:17:54.980987 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:54Z","lastTransitionTime":"2025-11-24T19:17:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.082883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.082917 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.082927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.082940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.082949 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.185062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.185089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.185100 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.185113 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.185124 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.286733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.286765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.286776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.286790 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.286801 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.390143 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.390204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.390217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.390229 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.390239 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.492945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.492997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.493008 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.493025 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.493037 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.595389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.595432 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.595445 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.595462 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.595477 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.698531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.698558 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.698570 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.698588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.698599 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.801746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.801792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.801803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.801819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.801829 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.903933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.903960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.903997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.904014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.904024 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:55Z","lastTransitionTime":"2025-11-24T19:17:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:55 crc kubenswrapper[4812]: I1124 19:17:55.964948 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:55 crc kubenswrapper[4812]: E1124 19:17:55.965097 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.006174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.006202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.006213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.006253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.006271 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.108636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.108704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.108721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.108751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.108768 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.211553 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.211639 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.211659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.211683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.211699 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.314913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.314958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.314970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.314988 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.315000 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.417709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.417786 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.417812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.417847 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.417870 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.521225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.521329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.521390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.521425 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.521447 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.624216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.624262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.624275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.624290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.624301 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.726499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.726557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.726573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.726595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.726612 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.829458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.829533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.829568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.829600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.829622 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.932843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.932900 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.932927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.932959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.932981 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:56Z","lastTransitionTime":"2025-11-24T19:17:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.964939 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:56 crc kubenswrapper[4812]: E1124 19:17:56.965150 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.965237 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.965003 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:56 crc kubenswrapper[4812]: E1124 19:17:56.965309 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:56 crc kubenswrapper[4812]: E1124 19:17:56.965465 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:56 crc kubenswrapper[4812]: I1124 19:17:56.985060 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync 
for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:56Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.002515 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.018443 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.037180 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.037305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.037379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.037393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.037416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 
19:17:57.037427 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.052965 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.069053 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.082504 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.107685 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: 
I1124 19:17:57.122840 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.138437 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.139287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.139319 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.139346 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.139367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.139379 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.153099 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.163533 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.185147 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f3
20508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb 
openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.200441 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.217310 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.229801 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.238646 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:57Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.241408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.241455 4812 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.241466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.241482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.241492 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.344138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.344205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.344225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.344250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.344267 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.446674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.446709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.446718 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.446733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.446746 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.549192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.549251 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.549273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.549300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.549322 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.652814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.652850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.652862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.652881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.652894 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.755614 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.755678 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.755697 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.755734 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.755758 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.861809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.861873 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.861891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.861915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.861935 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.964076 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.964612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.964758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.964788 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.964894 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:57 crc kubenswrapper[4812]: E1124 19:17:57.965103 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:57 crc kubenswrapper[4812]: I1124 19:17:57.965134 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:57Z","lastTransitionTime":"2025-11-24T19:17:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.067939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.068004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.068014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.068035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.068046 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.171771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.172426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.172631 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.172837 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.173039 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.276592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.276683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.276712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.276746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.276770 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.361050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:58 crc kubenswrapper[4812]: E1124 19:17:58.361309 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:58 crc kubenswrapper[4812]: E1124 19:17:58.361462 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:18:30.361427916 +0000 UTC m=+104.150380327 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.379720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.379771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.379789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.379813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.379835 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.482958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.483014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.483030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.483052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.483069 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.585600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.585664 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.585682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.585706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.585725 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.688605 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.688663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.688678 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.688699 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.688711 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.792513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.792582 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.792604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.792630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.792650 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.895932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.896017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.896034 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.896077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.896096 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.965552 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.965637 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:17:58 crc kubenswrapper[4812]: E1124 19:17:58.965669 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.965755 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:17:58 crc kubenswrapper[4812]: E1124 19:17:58.965810 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:17:58 crc kubenswrapper[4812]: E1124 19:17:58.966080 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.999196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.999248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.999269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.999293 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:58 crc kubenswrapper[4812]: I1124 19:17:58.999313 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:58Z","lastTransitionTime":"2025-11-24T19:17:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.102765 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.102856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.102887 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.102921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.102946 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.205980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.206043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.206062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.206087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.206106 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.309246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.309283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.309292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.309311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.309320 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.412247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.412285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.412295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.412311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.412321 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.421598 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/0.log" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.421641 4812 generic.go:334] "Generic (PLEG): container finished" podID="c270cb89-97c2-48c4-94c3-9b8420d81cfd" containerID="c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73" exitCode=1 Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.421671 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerDied","Data":"c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.422112 4812 scope.go:117] "RemoveContainer" containerID="c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.440510 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.462786 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.480778 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.500218 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.515263 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.515358 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.515384 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.515417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 
19:17:59.515439 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.525450 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.545357 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.564584 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.582748 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: 
I1124 19:17:59.595753 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.613399 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.621687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.621766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.621788 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.621829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.621847 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.637313 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.651137 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.681539 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.697991 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.720700 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T1
9:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.725123 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.725373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.725577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.725725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.725852 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.737397 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 
19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.751772 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:59Z is after 2025-08-24T17:21:41Z" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.828235 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.828261 4812 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.828269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.828284 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.828293 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.930707 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.931071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.931229 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.931427 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.931596 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:17:59Z","lastTransitionTime":"2025-11-24T19:17:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.965272 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:17:59 crc kubenswrapper[4812]: E1124 19:17:59.965421 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:17:59 crc kubenswrapper[4812]: I1124 19:17:59.966853 4812 scope.go:117] "RemoveContainer" containerID="ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.034396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.034616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.034718 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.034800 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.034874 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.137774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.137827 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.137845 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.137869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.137886 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.240926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.241023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.241041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.241066 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.241083 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.343394 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.343432 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.343442 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.343456 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.343467 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.444435 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/2.log" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.453976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.454015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.454026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.454043 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.454053 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.456742 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.458856 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/0.log" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.458887 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerStarted","Data":"a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.479469 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.498290 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.509861 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.524203 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.537171 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.548798 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.556835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.556869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.556878 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.556893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.556903 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.560762 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.580770 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb 
openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.591469 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.602761 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.615259 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.643017 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.659546 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.660257 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.660300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.660312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.660329 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.660367 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.672200 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.689441 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.702870 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 
19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.716519 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.730217 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.751811 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.762986 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.763055 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.763075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.763376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.763415 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.768698 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.785849 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.806317 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.825526 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.840900 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.863441 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: 
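Every "Failed to update status for pod" entry above fails the same way: the API server must call the pod.network-node-identity.openshift.io validating webhook before admitting the PATCH, and the TLS handshake with https://127.0.0.1:9743 is rejected because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, three months before the node clock reads 2025-11-24T19:18:00Z. A minimal Go sketch of the notBefore/notAfter window check behind the "x509: certificate has expired or is not yet valid" message follows; the certificate path is hypothetical, everything else is the standard library.

// Sketch, not kubelet's code: the validity-window check that produces
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("webhook-cert.pem") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block in input")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: notBefore=%s now=%s\n",
			cert.NotBefore.Format(time.RFC3339), now.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// Same shape as the log: "current time 2025-11-24T19:18:00Z
		// is after 2025-08-24T17:21:41Z".
		fmt.Printf("expired: notAfter=%s now=%s\n",
			cert.NotAfter.Format(time.RFC3339), now.Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}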
I1124 19:18:00.867186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.867233 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.867250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.867277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.867293 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.884037 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.905252 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.925136 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.938786 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status 
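The kube-multus termination message above shows why that container restarted: multus blocks until the default network (ovn-kubernetes) writes its readiness indicator file, and exits when the poll times out. A sketch of that style of wait follows; the path is copied from the log, while the interval and timeout values are illustrative, not multus's own.

// Sketch of a readiness-indicator wait like the one kube-multus logs:
// poll for the default network's CNI config until it exists or a
// deadline passes.
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // indicator file exists; default network is ready
		} else if !errors.Is(err, os.ErrNotExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Path copied from the log entry above.
	const indicator = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(indicator, time.Second, 45*time.Second); err != nil {
		log.Fatal(err) // corresponds to the "timed out waiting for the condition" line
	}
	fmt.Println("default network ready")
}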
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.961918 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb 
openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.965065 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.965156 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.965074 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:00 crc kubenswrapper[4812]: E1124 19:18:00.965283 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:00 crc kubenswrapper[4812]: E1124 19:18:00.965438 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:00 crc kubenswrapper[4812]: E1124 19:18:00.965545 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.970607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.970658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.970676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.970700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.970717 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:00Z","lastTransitionTime":"2025-11-24T19:18:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.979473 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:00 crc kubenswrapper[4812]: I1124 19:18:00.994968 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:00Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.006415 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.021536 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.073987 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.074041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc 
kubenswrapper[4812]: I1124 19:18:01.074059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.074084 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.074102 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.177389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.177482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.177507 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.177624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.177706 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.280692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.280751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.280768 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.280792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.280808 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.383731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.383779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.383796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.383818 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.383834 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.465062 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/3.log" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.466069 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/2.log" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.470077 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" exitCode=1 Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.470136 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.470182 4812 scope.go:117] "RemoveContainer" containerID="ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.471387 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:18:01 crc kubenswrapper[4812]: E1124 19:18:01.471707 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.486600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.486642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.486658 4812 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.486683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.486702 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.505041 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" 
feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.524289 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.539179 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.556990 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.571999 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.588247 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.589959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.589998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.590015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.590040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.590057 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.603969 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.618736 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.638364 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.656212 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.674530 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.692670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.692723 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.692746 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.692776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.692799 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.694992 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.710319 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.739096 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI1124 19:18:00.954245 6798 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954607 6798 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.954696 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954839 6798 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955051 6798 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955255 6798 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 
19:18:00.955427 6798 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.955732 6798 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 19:18:00.955788 6798 factory.go:656] Stopping watch factory\\\\nI1124 19:18:00.955807 6798 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.765207 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.783571 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 
19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.795989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.796053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.796072 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.796097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.796118 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.798462 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:01Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.898701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.898957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.899128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.899303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.899504 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:01Z","lastTransitionTime":"2025-11-24T19:18:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:01 crc kubenswrapper[4812]: I1124 19:18:01.965278 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:01 crc kubenswrapper[4812]: E1124 19:18:01.965395 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.002867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.002951 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.002972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.003018 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.003040 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.106142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.106186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.106203 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.106224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.106242 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.208783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.208871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.208896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.208941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.208969 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.311871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.311915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.311931 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.311954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.311973 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.414274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.414382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.414410 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.414437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.414460 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.477706 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/3.log" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.517826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.518056 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.518088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.518116 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.518134 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.621888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.621992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.622053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.622075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.622128 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.724681 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.724756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.724781 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.724816 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.724838 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.827879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.827932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.827949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.827972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.827990 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.930978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.931031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.931047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.931068 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.931087 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:02Z","lastTransitionTime":"2025-11-24T19:18:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.965091 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:02 crc kubenswrapper[4812]: E1124 19:18:02.965253 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.965100 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:02 crc kubenswrapper[4812]: I1124 19:18:02.965411 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:02 crc kubenswrapper[4812]: E1124 19:18:02.965555 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:02 crc kubenswrapper[4812]: E1124 19:18:02.965629 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.034216 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.034285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.034310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.034389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.034418 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.138646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.138747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.138770 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.138811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.138855 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.242498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.242586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.242604 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.242628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.242645 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.345920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.346049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.346067 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.346092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.346108 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.449829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.449925 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.449943 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.449998 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.450017 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.553323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.553408 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.553461 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.553487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.553505 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.657872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.657942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.657959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.657986 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.658003 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.761467 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.761534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.761557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.761590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.761614 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.864314 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.864402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.864418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.864442 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.864459 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.965696 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:03 crc kubenswrapper[4812]: E1124 19:18:03.965923 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.968182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.968232 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.968249 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.968269 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:03 crc kubenswrapper[4812]: I1124 19:18:03.968283 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:03Z","lastTransitionTime":"2025-11-24T19:18:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.072188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.072262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.072288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.072382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.072405 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.072405 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.175182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.175328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.175373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.175402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.175421 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.279519 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.279647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.279668 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.279701 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.279764 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.383592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.383655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.383675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.383700 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.383719 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.488149 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.488228 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.488246 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.488272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.488290 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.533884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.533986 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.534014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.534054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.534077 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.557905 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:04Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.564291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.564402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.564423 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.564450 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.564476 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.585732 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:04Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.591535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.591597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
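Every one of these status patches is rejected for the same reason: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node's clock reads 2025-11-24. The patch payload itself (capacity, conditions, image list) is irrelevant to the failure. Below is a minimal Go sketch for confirming what the endpoint actually serves, meant to be run on the node itself since the address is loopback; InsecureSkipVerify is deliberate, because the point is to read an expired certificate rather than to trust it.

// certcheck.go - a minimal sketch: connect to the webhook endpoint named in the
// log (127.0.0.1:9743, reachable only from the node) and print the validity
// window of the certificate chain it presents.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// InsecureSkipVerify is intentional: an expired certificate would otherwise
	// abort the handshake before we can inspect it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()

	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s\n  notBefore=%s\n  notAfter=%s\n  expired=%v\n",
			cert.Subject,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			time.Now().After(cert.NotAfter))
	}
}

With the timestamps from this log, notAfter would print as 2025-08-24T17:21:41Z and expired as true, matching the x509 error in each retry.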
event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.591616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.591646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.591667 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.612441 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:04Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.618448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.618519 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.618538 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.618568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.618592 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.641630 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:04Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.647926 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.648006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.648024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.648052 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.648069 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.670427 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:04Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.670719 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.674145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.674214 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.674234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.674264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.674296 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.778744 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.778805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.778829 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.778864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.778883 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.881804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.881879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.881902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.881934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.881956 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.965684 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.965879 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.966211 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.966314 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.966727 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:04 crc kubenswrapper[4812]: E1124 19:18:04.966950 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.984277 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.984352 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.984363 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.984386 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:04 crc kubenswrapper[4812]: I1124 19:18:04.984402 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:04Z","lastTransitionTime":"2025-11-24T19:18:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.088390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.088474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.088496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.088526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.088550 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.192621 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.192661 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.192673 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.192690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.192703 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.296650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.296704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.296726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.296757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.296777 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.399928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.400031 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.400051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.400081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.400107 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.503212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.503285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.503308 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.503379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.503406 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.607024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.607092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.607117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.607148 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.607173 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.710075 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.710133 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.710153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.710179 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.710199 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.813989 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.814054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.814071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.814096 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.814116 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.916711 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.916764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.916782 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.916803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.916817 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:05Z","lastTransitionTime":"2025-11-24T19:18:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:05 crc kubenswrapper[4812]: I1124 19:18:05.965109 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:05 crc kubenswrapper[4812]: E1124 19:18:05.965332 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.020440 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.020554 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.020575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.020598 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.020615 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.123843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.123915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.123935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.123979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.124011 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.227089 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.227172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.227209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.227241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.227263 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.330609 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.330692 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.330725 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.330754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.330775 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.433393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.433458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.433476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.433499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.433518 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.537240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.537309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.537402 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.537437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.537459 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.641880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.642064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.642127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.642162 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.642186 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.744693 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.745077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.745094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.745117 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.745147 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.848161 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.848228 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.848245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.848270 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.848290 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.950844 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.950902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.950913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.950932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.950947 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:06Z","lastTransitionTime":"2025-11-24T19:18:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.964772 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.964785 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:06 crc kubenswrapper[4812]: E1124 19:18:06.964895 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.964946 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:06 crc kubenswrapper[4812]: E1124 19:18:06.964959 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:06 crc kubenswrapper[4812]: E1124 19:18:06.965160 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.981232 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync 
for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:06Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:06 crc kubenswrapper[4812]: I1124 19:18:06.997114 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:06Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.014676 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.029646 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.044177 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.053256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.053323 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.053361 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.053383 4812 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.053398 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.057440 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.072173 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.085511 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.107963 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec8045f4c734c44fd26ba607bdda18716a7d6f37939a8a342ead5c58338d4e77\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:34Z\\\",\\\"message\\\":\\\"art network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:17:34Z is after 2025-08-24T17:21:41Z]\\\\nI1124 19:17:34.965800 6469 obj_retry.go:409] Going to retry *v1.Pod resource setup for 15 objects: [openshift-image-registry/node-ca-v9d5f openshift-kube-apiserver/kube-apiserver-crc openshift-multus/multus-additional-cni-plugins-qj8tt openshift-multus/network-metrics-daemon-jxmnc openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-diagnostics/network-check-target-xd92c openshift-dns/node-resolver-zwlsb openshift-network-node-identity/network-node-identity-vrzqb 
openshift-machine-config-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI1124 19:18:00.954245 6798 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954607 6798 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.954696 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954839 6798 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955051 6798 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955255 6798 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955427 6798 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.955732 6798 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 19:18:00.955788 6798 factory.go:656] Stopping watch factory\\\\nI1124 19:18:00.955807 6798 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:18:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.119439 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.133398 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.148143 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.156908 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.157006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.157030 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.157057 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.157110 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.163820 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.180306 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.199771 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.217551 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 
19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.233233 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:07Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.259721 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.259763 4812 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.259777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.259795 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.259810 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.362974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.363016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.363028 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.363044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.363059 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.465875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.465958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.465985 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.466020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.466046 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.569634 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.569705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.569726 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.569751 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.569770 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.672798 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.672927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.672946 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.672971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.672988 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.775709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.775767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.775787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.775813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.775831 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.879528 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.879612 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.879636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.879669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.879690 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.965157 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:07 crc kubenswrapper[4812]: E1124 19:18:07.965395 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.982654 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.982727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.982750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.982775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:07 crc kubenswrapper[4812]: I1124 19:18:07.982796 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:07Z","lastTransitionTime":"2025-11-24T19:18:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.086483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.086557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.086587 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.086618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.086642 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.190119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.190178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.190202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.190224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.190242 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.293753 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.293848 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.293867 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.293891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.293910 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.396812 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.396910 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.396930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.396955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.396973 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.499748 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.499831 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.499850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.499875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.499893 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.602535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.602585 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.602603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.602625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.602642 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.705808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.705868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.705903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.705928 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.705949 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.808870 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.808937 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.808954 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.808990 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.809008 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.912234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.912293 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.912309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.912366 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.912385 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:08Z","lastTransitionTime":"2025-11-24T19:18:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.964824 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.964906 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:18:08 crc kubenswrapper[4812]: E1124 19:18:08.964986 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 19:18:08 crc kubenswrapper[4812]: I1124 19:18:08.965001 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:18:08 crc kubenswrapper[4812]: E1124 19:18:08.965115 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:18:08 crc kubenswrapper[4812]: E1124 19:18:08.965231 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.015672 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.015735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.015762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.015792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.015809 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.118646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.118767 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.118785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.118808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.118825 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.222253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.222310 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.222328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.222379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.222397 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.325762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.325819 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.325835 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.325859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.325885 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.428901 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.428944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.428958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.428975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.428989 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.531719 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.531784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.531807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.531856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.531877 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.635047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.635113 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.635130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.635153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.635170 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.738142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.738205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.738222 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.738248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.738265 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.841290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.841373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.841392 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.841417 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.841436 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.944929 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.944992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.945010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.945033 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.945053 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:09Z","lastTransitionTime":"2025-11-24T19:18:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:09 crc kubenswrapper[4812]: I1124 19:18:09.964729 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:18:09 crc kubenswrapper[4812]: E1124 19:18:09.964963 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.048762 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.048834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.048855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.048883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.048901 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.155757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.155850 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.155876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.155911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.155945 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.260053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.260112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.260132 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.260155 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.260174 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.363103 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.363171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.363188 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.363213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.363275 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.466280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.466396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.466423 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.466454 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.466480 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.569613 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.569687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.569706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.569732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.569751 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.673437 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.673499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.673515 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.673542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.673566 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.707956 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.708172 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:14.708130696 +0000 UTC m=+148.497083097 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.708378 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.708554 4812 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.708664 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:14.708635079 +0000 UTC m=+148.497587480 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.777419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.777490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.777513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.777541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.777565 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.885774 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.885837 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.885856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.885882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.885901 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.911370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.911433 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.911476 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911623 4812 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911709 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:14.911693006 +0000 UTC m=+148.700645397 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911730 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911771 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911772 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911822 4812 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911844 4812 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911792 4812 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.911931 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:14.911899122 +0000 UTC m=+148.700851553 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.912110 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:14.912041016 +0000 UTC m=+148.700993417 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.965092 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.965167 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.965104 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.965291 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.966007 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 19:18:10 crc kubenswrapper[4812]: E1124 19:18:10.966109 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.979691 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.989550 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.989615 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.989640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.989667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:10 crc kubenswrapper[4812]: I1124 19:18:10.989688 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:10Z","lastTransitionTime":"2025-11-24T19:18:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.092468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.092566 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.092589 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.092620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.092643 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.195641 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.195750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.195772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.195794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.195810 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.298844 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.298920 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.298947 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.298980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.299003 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.403377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.403630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.403650 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.403676 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.403699 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.507283 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.507375 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.507394 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.507419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.507440 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.610237 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.610289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.610308 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.610373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.610392 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.714101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.714163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.714180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.714204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.714220 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.817632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.817683 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.817702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.817727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.817747 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.920448 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.920484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.920496 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.920511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.920522 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:11Z","lastTransitionTime":"2025-11-24T19:18:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:11 crc kubenswrapper[4812]: I1124 19:18:11.965246 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:18:11 crc kubenswrapper[4812]: E1124 19:18:11.965490 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.024158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.024226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.024257 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.024288 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.024312 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.127803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.127886 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.127905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.127939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.127965 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.236727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.236825 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.236849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.236881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.236902 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.340941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.341020 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.341040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.341079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.341103 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.445262 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.445377 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.445394 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.445424 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.445442 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.548145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.548238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.548259 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.548287 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.548308 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.652485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.652581 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.652603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.652662 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.652682 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.755712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.755769 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.755783 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.755803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.755818 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.859530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.859586 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.859601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.859625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.859638 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.963896 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.963959 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.963980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.964007 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.964028 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:12Z","lastTransitionTime":"2025-11-24T19:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.964840 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.964906 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:18:12 crc kubenswrapper[4812]: I1124 19:18:12.964975 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:18:12 crc kubenswrapper[4812]: E1124 19:18:12.964993 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 19:18:12 crc kubenswrapper[4812]: E1124 19:18:12.965156 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 19:18:12 crc kubenswrapper[4812]: E1124 19:18:12.966535 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.066791 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.066864 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.066882 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.066907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.066927 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.172798 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.172859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.172880 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.172905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.172925 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.276803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.276866 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.276886 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.276912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.276929 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.380127 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.380191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.380210 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.380240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.380261 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.483451 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.483513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.483530 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.483558 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.483577 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.587702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.587789 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.587823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.587854 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.587879 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.691212 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.691311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.691364 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.691393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.691412 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.796595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.796679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.796702 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.796736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.796761 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.900404 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.900458 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.900474 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.900500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.900519 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:13Z","lastTransitionTime":"2025-11-24T19:18:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.965499 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:13 crc kubenswrapper[4812]: E1124 19:18:13.965715 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.966867 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:18:13 crc kubenswrapper[4812]: E1124 19:18:13.967167 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:18:13 crc kubenswrapper[4812]: I1124 19:18:13.990472 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:13Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.005620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.005696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.005714 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.005741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.005764 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.014843 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.033409 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05837620-7093-4d7b-a1f9-878b480a17a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d638be4feba4e12b69f18ce811971301474f2cc3fcac38d24389f4db9bb921f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.052559 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.073143 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.092497 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.110409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.110460 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.110477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.110513 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.110532 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.115987 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.137318 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.170004 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI1124 19:18:00.954245 6798 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954607 6798 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.954696 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954839 6798 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955051 6798 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955255 6798 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955427 6798 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.955732 6798 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 19:18:00.955788 6798 factory.go:656] Stopping watch factory\\\\nI1124 19:18:00.955807 6798 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.188778 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.210295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.213803 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.213863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.213881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 
19:18:14.213907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.213925 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.231490 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.253380 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.275691 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.290887 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.316930 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.317733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.317796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc 
kubenswrapper[4812]: I1124 19:18:14.317814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.317840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.317858 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.335681 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:1
7:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.353138 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.421370 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.421464 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.421481 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.421507 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.421527 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.524965 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.525083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.525109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.525184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.525208 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.628500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.628600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.628636 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.628667 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.628690 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.732556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.732627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.732646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.732674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.732695 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.836248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.836299 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.836315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.836380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.836405 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.858289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.858383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.858409 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.858435 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.858456 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.879448 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.884432 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.884500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.884525 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.884556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.884580 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.904475 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.909264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.909322 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.909370 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.909397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.909417 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.928165 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.932573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.932648 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.932675 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.932706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.932729 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.951401 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.956755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.956821 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.956846 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.956874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.956897 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.965014 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.965014 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.965047 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.965523 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.966502 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.966739 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.983126 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:14Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:14 crc kubenswrapper[4812]: E1124 19:18:14.983456 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.985642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.985849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.985871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.985901 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:14 crc kubenswrapper[4812]: I1124 19:18:14.985923 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:14Z","lastTransitionTime":"2025-11-24T19:18:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.088476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.088554 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.088578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.088608 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.088631 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.190972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.191041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.191064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.191096 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.191117 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.294128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.294217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.294240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.294271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.294294 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.367440 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.368560 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:18:15 crc kubenswrapper[4812]: E1124 19:18:15.368758 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.397727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.397776 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.397849 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.397868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.397901 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.501830 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.501915 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.501940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.501974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.501997 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.605205 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.605785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.606281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.606503 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.606697 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.709670 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.709724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.709742 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.709766 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.709782 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.813013 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.813065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.813091 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.813116 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.813132 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.916225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.916279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.916295 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.916318 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.916364 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:15Z","lastTransitionTime":"2025-11-24T19:18:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:15 crc kubenswrapper[4812]: I1124 19:18:15.964971 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:15 crc kubenswrapper[4812]: E1124 19:18:15.965180 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.018957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.019029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.019054 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.019083 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.019106 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.121820 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.121907 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.121933 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.121966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.121989 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.225814 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.225938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.225964 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.225997 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.226019 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.328902 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.328968 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.328984 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.329015 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.329032 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.432092 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.432159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.432181 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.432215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.432238 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.535036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.535126 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.535151 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.535180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.535202 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.637731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.637808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.637832 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.637859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.637880 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.741070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.741146 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.741172 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.741202 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.741225 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.843792 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.843855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.843877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.843901 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.843923 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.946624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.946682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.946698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.946724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.946745 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:16Z","lastTransitionTime":"2025-11-24T19:18:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.965215 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.965299 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:16 crc kubenswrapper[4812]: E1124 19:18:16.965458 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.965496 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:16 crc kubenswrapper[4812]: E1124 19:18:16.966391 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:16 crc kubenswrapper[4812]: E1124 19:18:16.966664 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.990112 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 24 19:18:16 crc kubenswrapper[4812]: I1124 19:18:16.996134 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name
\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:16Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.015884 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.029150 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.046295 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.048930 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.048975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.048984 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.048999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.049008 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.060647 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.070252 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05837620-7093-4d7b-a1f9-878b480a17a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d638be4feba4e12b69f18ce811971301474f2cc3fcac38d24389f4db9bb921f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.086440 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.106285 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.125855 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.143919 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.152290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.152361 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.152376 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.152397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.152411 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.158935 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.178512 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.200133 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.221130 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.241240 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.255292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.255358 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.255369 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.255386 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.255396 4812 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.257499 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.288459 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI1124 19:18:00.954245 6798 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954607 6798 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.954696 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954839 6798 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955051 6798 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955255 6798 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955427 6798 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.955732 6798 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 19:18:00.955788 6798 factory.go:656] Stopping watch factory\\\\nI1124 19:18:00.955807 6798 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.304693 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:17Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.358720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.358772 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.358788 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.358807 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.358824 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.462186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.462263 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.462282 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.462307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.462325 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.564575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.564630 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.564644 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.564665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.564683 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.668119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.668180 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.668199 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.668226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.668245 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.771944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.772017 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.772035 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.772063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.772081 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.875184 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.875241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.875258 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.875281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.875298 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.965222 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:17 crc kubenswrapper[4812]: E1124 19:18:17.965726 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.978223 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.978293 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.978317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.978382 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:17 crc kubenswrapper[4812]: I1124 19:18:17.978410 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:17Z","lastTransitionTime":"2025-11-24T19:18:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.081466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.081523 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.081541 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.081567 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.081585 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.184978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.185051 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.185063 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.185081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.185093 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.288011 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.288073 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.288095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.288121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.288139 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.391217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.391271 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.391289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.391317 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.391378 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.494112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.494191 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.494217 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.494253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.494275 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.597531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.597619 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.597646 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.597696 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.597723 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.700810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.700868 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.700885 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.700911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.700927 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.804391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.804465 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.804482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.804507 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.804526 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.907891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.907950 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.907966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.907988 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.908005 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:18Z","lastTransitionTime":"2025-11-24T19:18:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.965162 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.965197 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:18 crc kubenswrapper[4812]: E1124 19:18:18.965325 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:18 crc kubenswrapper[4812]: I1124 19:18:18.965410 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:18 crc kubenswrapper[4812]: E1124 19:18:18.965588 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:18 crc kubenswrapper[4812]: E1124 19:18:18.965677 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.010727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.010804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.010825 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.010856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.011039 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.113876 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.113952 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.113970 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.113994 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.114011 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.217415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.217485 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.217499 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.217517 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.217532 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.320883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.320923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.320934 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.320949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.320961 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.423618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.423688 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.423705 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.423733 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.423752 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.527285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.527378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.527397 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.527422 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.527439 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.631557 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.631645 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.631665 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.631690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.631710 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.734980 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.735064 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.735086 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.735114 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.735135 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.838628 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.838710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.838727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.838764 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.838804 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.941966 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.942032 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.942048 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.942077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.942097 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:19Z","lastTransitionTime":"2025-11-24T19:18:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:19 crc kubenswrapper[4812]: I1124 19:18:19.964693 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:19 crc kubenswrapper[4812]: E1124 19:18:19.964918 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.045576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.045642 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.045660 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.045687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.045706 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.149652 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.149712 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.149747 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.149777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.149804 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.253778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.253908 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.253927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.253955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.253975 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.357302 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.357396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.357415 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.357440 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.357459 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.460962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.461042 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.461060 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.461088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.461107 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.563568 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.563624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.563640 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.563663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.563681 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.667274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.667370 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.667388 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.667414 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.667433 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.770618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.770682 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.770698 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.770736 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.770755 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.873982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.874101 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.874119 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.874144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.874163 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.964709 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.964849 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.964731 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:20 crc kubenswrapper[4812]: E1124 19:18:20.965020 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:20 crc kubenswrapper[4812]: E1124 19:18:20.965144 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:20 crc kubenswrapper[4812]: E1124 19:18:20.965497 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.976658 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.976710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.976727 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.976754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:20 crc kubenswrapper[4812]: I1124 19:18:20.976772 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:20Z","lastTransitionTime":"2025-11-24T19:18:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.079975 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.080062 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.080087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.080123 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.080149 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.183253 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.183330 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.183392 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.183418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.183435 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.286280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.286367 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.286381 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.286401 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.286418 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.390777 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.390852 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.390874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.390904 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.390925 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.494245 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.494328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.494396 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.494433 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.494458 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.598141 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.598215 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.598238 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.598266 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.598333 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.701401 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.701477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.701500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.701535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.701559 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.804750 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.804811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.804828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.804855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.804874 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.908801 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.908905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.908923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.908948 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.908965 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:21Z","lastTransitionTime":"2025-11-24T19:18:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:21 crc kubenswrapper[4812]: I1124 19:18:21.965127 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:21 crc kubenswrapper[4812]: E1124 19:18:21.965457 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.012128 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.012178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.012192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.012209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.012220 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.115859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.115919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.115938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.115964 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.115985 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.220178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.220247 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.220265 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.220291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.220308 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.323531 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.323601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.323617 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.323638 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.323649 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.426498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.426573 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.426590 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.426620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.426637 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.529592 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.529666 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.529687 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.529717 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.529742 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.639626 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.640556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.640595 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.640625 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.640647 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.743483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.743535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.743554 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.743576 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.743593 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.846536 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.846600 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.846618 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.846643 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.846663 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.949942 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.950006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.950025 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.950050 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.950070 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:22Z","lastTransitionTime":"2025-11-24T19:18:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.964771 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:22 crc kubenswrapper[4812]: E1124 19:18:22.964939 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.965151 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:22 crc kubenswrapper[4812]: E1124 19:18:22.965221 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:22 crc kubenswrapper[4812]: I1124 19:18:22.965358 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:22 crc kubenswrapper[4812]: E1124 19:18:22.965405 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.053234 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.053556 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.053756 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.053960 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.054111 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.157813 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.157881 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.157891 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.157913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.157926 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.261081 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.261145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.261162 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.261186 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.261205 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.364024 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.364090 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.364112 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.364138 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.364156 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.467918 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.467979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.467988 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.468006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.468017 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.571380 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.571463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.571478 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.571493 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.571503 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.674802 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.674869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.674888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.674911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.674929 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.778428 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.778490 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.778508 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.778535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.778553 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.881993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.882053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.882070 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.882097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.882115 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.965162 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:23 crc kubenswrapper[4812]: E1124 19:18:23.965391 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.986516 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.986580 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.986599 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.986629 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:23 crc kubenswrapper[4812]: I1124 19:18:23.986649 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:23Z","lastTransitionTime":"2025-11-24T19:18:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.089230 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.089286 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.089303 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.089328 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.089379 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.192392 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.192577 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.192597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.192622 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.192639 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.295399 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.295463 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.295483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.295510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.295529 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.398833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.398890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.398905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.398927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.398944 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.502095 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.502144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.502159 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.502183 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.502199 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.605779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.605839 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.605855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.605877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.605901 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.708543 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.708627 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.708647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.708678 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.708697 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.811988 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.812071 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.812094 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.812121 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.812140 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.915822 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.915892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.915909 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.915935 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.915954 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:24Z","lastTransitionTime":"2025-11-24T19:18:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.965677 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.965677 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:24 crc kubenswrapper[4812]: E1124 19:18:24.965886 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:24 crc kubenswrapper[4812]: I1124 19:18:24.965952 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:24 crc kubenswrapper[4812]: E1124 19:18:24.966035 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:24 crc kubenswrapper[4812]: E1124 19:18:24.966630 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.019939 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.020065 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.020147 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.020231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.020315 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.124730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.124793 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.124811 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.124840 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.124859 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.228231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.228305 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.228326 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.228391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.228410 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.266808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.266874 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.266893 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.266921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.266939 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: E1124 19:18:25.287590 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.292758 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.292842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.292865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.292897 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.292920 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: E1124 19:18:25.314104 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.319991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.320041 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.320058 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.320079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.320096 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: E1124 19:18:25.341107 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.345450 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.345500 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.345518 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.345545 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.345564 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: E1124 19:18:25.364801 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.369913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.369974 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.369999 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.370025 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.370042 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: E1124 19:18:25.390127 4812 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"4c3675a7-88da-4626-a835-681b7b2f3a7b\\\",\\\"systemUUID\\\":\\\"c55eafaa-534f-4192-bbf2-c31e1d9c2aed\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:25Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:25 crc kubenswrapper[4812]: E1124 19:18:25.390488 4812 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.392912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.392995 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.393014 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.393039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.393086 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.496040 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.496115 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.496144 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.496171 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.496191 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.599240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.599320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.599391 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.599418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.599436 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.703153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.703204 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.703224 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.703248 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.703266 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.806273 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.806378 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.806398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.806426 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.806449 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.910491 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.910588 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.910620 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.910655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.910676 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:25Z","lastTransitionTime":"2025-11-24T19:18:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:25 crc kubenswrapper[4812]: I1124 19:18:25.965603 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:25 crc kubenswrapper[4812]: E1124 19:18:25.965792 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.013890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.013941 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.013958 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.013982 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.014000 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.117078 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.117142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.117163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.117189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.117207 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.220311 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.220398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.220416 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.220440 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.220457 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.323079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.323139 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.323155 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.323178 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.323197 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.425957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.426005 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.426022 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.426044 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.426060 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.528279 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.528373 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.528393 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.528423 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.528438 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.632477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.632575 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.632603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.632655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.632692 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.735834 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.736243 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.736472 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.736663 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.736844 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.840601 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.840923 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.841047 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.841190 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.841315 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.944304 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.944466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.944534 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.944562 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.944615 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:26Z","lastTransitionTime":"2025-11-24T19:18:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.965528 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.965615 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.965666 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:26 crc kubenswrapper[4812]: E1124 19:18:26.965964 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:26 crc kubenswrapper[4812]: E1124 19:18:26.966113 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:26 crc kubenswrapper[4812]: E1124 19:18:26.966419 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.968111 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:18:26 crc kubenswrapper[4812]: E1124 19:18:26.968534 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:18:26 crc kubenswrapper[4812]: I1124 19:18:26.983567 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zwlsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1e7fb7b6-a04c-4594-a2ed-b81aa6bcced6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de302ae9ca1eda12913951f633d46009e3f3c6052404106b989cf61376ec3346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2brdr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zwlsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:26Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.016923 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b24bf762-6020-46b4-b9e8-589eb8ed0650\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"message\\\":\\\"mers/factory.go:160\\\\nI1124 19:18:00.954245 6798 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954607 6798 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.954696 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.954839 6798 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955051 6798 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955255 6798 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1124 19:18:00.955427 6798 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 19:18:00.955732 6798 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 19:18:00.955788 6798 factory.go:656] Stopping watch factory\\\\nI1124 19:18:00.955807 6798 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:18:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x4zx7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-dgm54\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.033267 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v9d5f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28fff088-73f1-4f9c-b240-e3bb4b704b07\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30623e51ef17d7044cf58426db8142dd25df525bf998be89a821aa0635150aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw6wv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:15Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v9d5f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.047731 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.047794 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.047805 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.047824 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.047836 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.067083 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75c2a73c-65e8-4347-8c7c-f542a70052a7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abf216fa83ef5fa98d2ff37110ab603c5b2ba3ad8a7058d2c2dfa841fd62ea48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70b6d5f5190f8221fa3f7722e5a588150fed89d00fc7642d9bc79a2b33a9f08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27f8e3348ff228f38a656f0a2f6af9023fa101953b96b8ff565a73d237587513\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea26a8e57647f1910976dfe5afdc45869a513a108d780cbc81e92fed2a61021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b2c0976caab5b6dd5e2876a4b2a103769b9d4f5c7e0d821bdbbb77a37f9cfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a5b106331ecbdf6223c0a417dc103af41e3d44e823fe435b8cbffb34819445a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a5b106331ecbdf6223c0a417dc103af41e3d44e823fe435b8cbffb34819445a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3100cef9269b8932c3796554abaaa73b6083a53c903ea3c255dfb4806c65838e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3100cef9269b8932c3796554abaaa73b6083a53c903ea3c255dfb4806c65838e\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-11-24T19:16:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://763563e40455af3ff76a193d3836ff1642aefa3bbe5d8859465b9245da8c9fa6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://763563e40455af3ff76a193d3836ff1642aefa3bbe5d8859465b9245da8c9fa6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.082772 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2eab8b66bd31c1f694efd3455fcdb5bcf253c8a528dbe40689489c827e82df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.099153 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.115829 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://166ec839c9f5835191295e1b792d07776d5550badb8a22d9517e5c8602435d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4aa6472bdc8bbde903ee0195ccb09dcbc73ba87c548a99a87736291f96b556a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{
\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.130050 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lhgj5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c270cb89-97c2-48c4-94c3-9b8420d81cfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:18:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T19:17:59Z\\\",\\\"message\\\":\\\"2025-11-24T19:17:13+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151\\\\n2025-11-24T19:17:13+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b722c2d6-b54a-47ae-b686-d03de7e9e151 to /host/opt/cni/bin/\\\\n2025-11-24T19:17:14Z [verbose] multus-daemon started\\\\n2025-11-24T19:17:14Z [verbose] Readiness Indicator file check\\\\n2025-11-24T19:17:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m8n2v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lhgj5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.150477 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.150550 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.150574 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.150603 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.150624 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.152142 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed62377b-bc6c-45ab-9c18-a91c1b8fb56d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e11c0eee209ea98d4dceb36eb6b15a5189a3c2f6b481391292e7f634e0f364b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://297685b681c7bace3ea9daf43184bd43adf565996cde69a39f5af45ae9a0fef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://159552ca5c56438fa89fe36fbf621edf52e4d59306f1a634ea8e1a59fdbe36f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0beb4891e826d83967f7946815704397afbae6fa005af368012953b2d91a1749\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://87cca6eb5d3d53437e2f8cf73ab6e572cec371e4038320efaff6b423839a17b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:16Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4776a54a20272ad5ccf730af679bd49e5190a9ec59e10bd5a325f13bbf600419\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1e871604683ad20d0378ce056ff3f46ceb54cd2be4abbeae91916f5f83d6103\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:17:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:17:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qkmss\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-qj8tt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.167652 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23a6d3e0-f315-42dd-bfc0-1dcf90de3a56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2019326a15074e65a11467a45f9a1d8a0ac97d7669b071898b5f80ddc6984b7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909115dbbacbaefddfe18d3c3d9c0860352c95ed9c1e478155813d24d2d8dd4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s6k5m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sxfp4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.184033 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:26Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xb9mn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:26Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jxmnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc 
kubenswrapper[4812]: I1124 19:18:27.205946 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df7493f1-3bc7-4efc-bc0e-10a0fb54bb93\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa4184f60fa477aab74bf492527be8db823ae94afd0c053bd2b725fe5ec4af8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793117c53f557dd4a560679428f86ee54232edd8468ec78d7ccdcadf6ee27c06\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28f85f935c6e341253a7b126b2adb082c1e258ed41080592a120a62d7ddfb94d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://b9e6d78bfb59a5df30b6a8ec78f8a702dafa31d385875e1953ccfabfb77c9e15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://37561804666c4ea65eb69e62bb58d63749fcfaacfe279eb475f3f3e406957c4e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"ed_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1764011820\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1764011820\\\\\\\\\\\\\\\" (2025-11-24 18:17:00 +0000 UTC to 2026-11-24 18:17:00 +0000 UTC (now=2025-11-24 19:17:06.480382096 +0000 UTC))\\\\\\\"\\\\nI1124 19:17:06.480414 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI1124 19:17:06.480436 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI1124 19:17:06.481149 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481213 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI1124 19:17:06.481224 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1576226882/tls.crt::/tmp/serving-cert-1576226882/tls.key\\\\\\\"\\\\nI1124 19:17:06.481147 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\nI1124 19:17:06.481292 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481369 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI1124 19:17:06.481402 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI1124 19:17:06.481413 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI1124 19:17:06.481534 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI1124 19:17:06.481555 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF1124 19:17:06.481667 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:50Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e9e18427039c73e0fe182cf1f01608321d57ece75f4c4505051a5725dc7c764\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0337c3263f2b182c4ed74bab915b27d305843f275af04d1254165255a91e4380\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.223691 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e72458f90be0fcb8bb1ffb8ad3c6aaee7e3b65bb35e7876100eed0f679debf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.237687 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcb3ad4b-5afb-47fe-8963-9f79489d45d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://266df0541aac55bc6e4dc24199e4e3fa06ef7af8afed01aad66f8c6837116820\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:17:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r8fl9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:17:12Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-nscsk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.252285 4812 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05837620-7093-4d7b-a1f9-878b480a17a0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0d638be4feba4e12b69f18ce811971301474f2cc3fcac38d24389f4db9bb921f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1cff157fe6a1158c6284824597c12979bd1c0b49d9014b18ecde558cecabf626\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.254097 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:27 crc kubenswrapper[4812]: 
I1124 19:18:27.254140 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.254150 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.254169 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.254180 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.267461 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e1e76a3-3e3a-4cf3-a230-ba862eb87bd7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d1b99fb73ebd02df37d8967752ad94ce4488e9690f5c163efa4a30592849e3e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2c91670e98bc282ff883848aee029e9ff085db7efbd1ed8a91c5fe1f159eda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://79ffc4cb4286a683020d574f40064acf95c34e6c28bde7742ce70d88f9a4fbaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://894b0425540c13e40d4c3a8192590e64ec8d2e8749d23f94d22370324ac27b4c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.284875 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c043396e-0f76-45e0-bcae-157fffd134e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T19:16:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f74116a5466978c8afd41743fd293e31923f564b433bb596353ca115f2cb4c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea22f1aa0d37158c0c37b28be396018fcfca35f015f10b351dda30b1d87f70ff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4b672bb684eb00bf051efbbda90f2281b3d7e8dcd2b459f07a482ad3d1d3f51\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e6d86b14458d408d9fec85152c04ceac429333d9dc1450c06d46761a8cf8978\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T19:16:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T19:16:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.300760 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.317283 4812 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T19:17:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T19:18:27Z is after 2025-08-24T17:21:41Z" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.356787 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.356828 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.356861 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.356879 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.356891 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.459971 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.460036 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.460053 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.460079 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.460097 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.563211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.563290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.563313 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.563365 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.563383 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.666192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.666256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.666274 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.666301 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.666321 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.768927 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.769389 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.769593 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.769899 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.770191 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.965423 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:18:27 crc kubenswrapper[4812]: E1124 19:18:27.965870 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.976888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.976955 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.976977 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.977004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:27 crc kubenswrapper[4812]: I1124 19:18:27.977022 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:27Z","lastTransitionTime":"2025-11-24T19:18:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
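The repeating NodeNotReady heartbeats and the "Error syncing pod" records above all share one root cause: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet. The directory check below is a minimal sketch that mirrors that probe; the path comes straight from the log message, but the real check runs inside the container runtime (CRI-O on OpenShift), so this is an illustration, not the actual implementation:

    from pathlib import Path

    CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # path from the log message above

    # libcni-style loaders consider .conf, .conflist and .json files as candidates.
    candidates = (
        sorted(p for p in CNI_DIR.iterdir() if p.suffix in {".conf", ".conflist", ".json"})
        if CNI_DIR.is_dir()
        else []
    )

    if not candidates:
        print(f"NetworkReady=false: no CNI configuration file in {CNI_DIR}/")
    else:
        for p in candidates:
            print("found CNI config:", p.name)

Until the network plugin (OVN-Kubernetes on this kind of cluster) writes its configuration there, the kubelet keeps the node NotReady and skips syncing any pod that needs pod networking, which is exactly the loop recorded here.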
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.079856 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.079921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.079938 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.079962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.079977 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.183383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.183455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.183473 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.183498 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.183517 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.286722 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.286779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.286796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.286823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.286847 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.390804 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.390871 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.390890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.390914 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.390932 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.493888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.493949 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.493978 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.494010 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.494031 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.596633 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.596679 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.596690 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.596708 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.596721 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.699741 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.699903 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.699919 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.699964 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.699982 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.802861 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.802976 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.802993 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.803016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.803033 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.905529 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.905579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.905591 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.905607 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.905618 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:28Z","lastTransitionTime":"2025-11-24T19:18:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.965137 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.965285 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:28 crc kubenswrapper[4812]: I1124 19:18:28.965176 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:28 crc kubenswrapper[4812]: E1124 19:18:28.965387 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:28 crc kubenswrapper[4812]: E1124 19:18:28.965455 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:28 crc kubenswrapper[4812]: E1124 19:18:28.965526 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.008264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.008315 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.008326 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.008372 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.008385 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.111655 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.111704 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.111724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.111752 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.111774 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.214875 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.214945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.214967 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.214992 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.215008 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.318242 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.318281 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.318292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.318307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.318319 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.420164 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.420221 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.420231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.420264 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.420275 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.523684 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.523732 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.523743 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.523755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.523764 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.626109 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.626174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.626196 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.626226 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.626253 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.729476 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.729542 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.729569 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.729597 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.729618 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.832616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.832717 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.832754 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.832779 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.832797 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.936716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.936785 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.936832 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.936869 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.936895 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:29Z","lastTransitionTime":"2025-11-24T19:18:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:29 crc kubenswrapper[4812]: I1124 19:18:29.965745 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:29 crc kubenswrapper[4812]: E1124 19:18:29.966086 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.040822 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.040921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.040944 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.040977 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.041004 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.145077 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.145174 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.145193 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.145218 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.145234 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.248945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.249029 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.249046 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.249080 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.249095 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.352940 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.353026 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.353049 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.353087 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.353111 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.430751 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:18:30 crc kubenswrapper[4812]: E1124 19:18:30.431007 4812 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 19:18:30 crc kubenswrapper[4812]: E1124 19:18:30.431157 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs podName:2d681a94-8d4a-45cc-8559-dfd15b6d0b1e nodeName:}" failed. No retries permitted until 2025-11-24 19:19:34.431129215 +0000 UTC m=+168.220081586 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs") pod "network-metrics-daemon-jxmnc" (UID: "2d681a94-8d4a-45cc-8559-dfd15b6d0b1e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.456823 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.456865 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.456883 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.456908 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.456927 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.560177 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.560240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.560260 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.560285 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.560303 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
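The "No retries permitted until 2025-11-24 19:19:34 ... (durationBeforeRetry 1m4s)" line above is kubelet's doubling backoff for failed volume operations: 64s is what a 500ms base delay becomes after being doubled across several consecutive failures, so this mount has already failed repeatedly. A sketch of that schedule follows; the base and cap constants are assumptions about kubelet defaults, not values read from this log:

    # Doubling backoff as kubelet's volume manager applies it between retries.
    # Assumed defaults: 0.5s base, capped at 2m2s (122s); both are assumptions.
    base_s, cap_s = 0.5, 122.0

    delay = base_s
    for attempt in range(1, 11):
        print(f"failure {attempt:2d}: next retry in {min(delay, cap_s):6.1f}s")
        delay *= 2
    # failure 8 prints 64.0s, matching "durationBeforeRetry 1m4s" above.

The underlying error, object "openshift-multus"/"metrics-daemon-secret" not registered, indicates the kubelet has not yet registered that secret with its watch-based object cache, which is expected while the node is still NotReady; the mount is retried once the secret becomes available.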
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.663908 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.663962 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.663972 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.663991 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.664004 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.767434 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.767505 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.767526 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.767551 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.767569 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.871142 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.871208 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.871218 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.871240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.871251 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.965456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.965518 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.965517 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:30 crc kubenswrapper[4812]: E1124 19:18:30.965656 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:30 crc kubenswrapper[4812]: E1124 19:18:30.965896 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:30 crc kubenswrapper[4812]: E1124 19:18:30.966093 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.973898 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.973961 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.973979 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.974006 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:30 crc kubenswrapper[4812]: I1124 19:18:30.974027 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:30Z","lastTransitionTime":"2025-11-24T19:18:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.077292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.077398 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.077419 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.077442 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.077461 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.180189 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.180250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.180267 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.180294 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.180311 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.282728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.282771 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.282780 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.282796 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.282805 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.385553 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.385616 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.385635 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.385659 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.385677 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.489218 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.489291 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.489309 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.489379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.489418 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.592669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.592720 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.592740 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.592763 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.592781 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.695669 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.695716 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.695724 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.695738 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.695749 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.798730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.798809 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.798826 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.798855 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.798874 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.901815 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.901861 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.901872 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.901890 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.901902 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:31Z","lastTransitionTime":"2025-11-24T19:18:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 19:18:31 crc kubenswrapper[4812]: I1124 19:18:31.964877 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:18:31 crc kubenswrapper[4812]: E1124 19:18:31.965553 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.005225 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.005297 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.005320 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.005383 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.005402 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.108088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.108192 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.108213 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.108241 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.108262 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
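At roughly ten records per second, this stretch of the journal reads better aggregated than raw. The tally below is a small sketch for a saved dump of the unit's log; the kubelet.log filename (for example from journalctl -u kubelet > kubelet.log) is an assumption:

    import re
    from collections import Counter

    node_not_ready = 0
    pod_errors = Counter()

    # kubelet.log is a hypothetical text dump of this journal.
    with open("kubelet.log", encoding="utf-8", errors="replace") as f:
        for line in f:
            if "Node became not ready" in line:
                node_not_ready += 1
            m = re.search(r'"Error syncing pod, skipping".*?pod="([^"]+)"', line)
            if m:
                pod_errors[m.group(1)] += 1

    print("NodeNotReady heartbeats:", node_not_ready)
    for pod, n in pod_errors.most_common():
        print(f"{n:4d}  {pod}")

For the window shown here, the same four pods (network-metrics-daemon-jxmnc, network-check-target-xd92c, network-check-source-55646444c4-trplf and networking-console-plugin-85b44fc459-gdk6g) cycle through the sync error every second or two, while the NodeNotReady heartbeat repeats about every 100ms.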
Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.211778 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.211862 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.211884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.211913 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.211931 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.315209 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.315292 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.315312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.315390 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.315411 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.418387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.418455 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.418466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.418482 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.418493 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.521846 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.521911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.521932 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.521957 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.521975 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.624418 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.624466 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.624483 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.624505 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.624541 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.727836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.727895 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.727912 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.727936 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.727953 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.831154 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.831236 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.831250 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.831272 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.831287 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.934088 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.934145 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.934153 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.934166 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.934190 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:32Z","lastTransitionTime":"2025-11-24T19:18:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.965194 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.965288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:32 crc kubenswrapper[4812]: I1124 19:18:32.965288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:32 crc kubenswrapper[4812]: E1124 19:18:32.965475 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
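
Every "Node became not ready" entry above repeats the same Ready condition. As a reading aid, here is a minimal Go sketch of that payload; the struct is a local stand-in with field names copied from the logged JSON, not the actual k8s.io/api type:

package main

import (
	"encoding/json"
	"fmt"
)

// Mirrors the condition={...} object logged by setters.go:603 while the
// node waits for CNI. Field names and values are taken from the entries above.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2025-11-24T19:18:32Z",
		LastTransitionTime: "2025-11-24T19:18:32Z",
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // reproduces the logged condition object
}
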
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:32 crc kubenswrapper[4812]: E1124 19:18:32.965712 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:32 crc kubenswrapper[4812]: E1124 19:18:32.965804 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.038158 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.038231 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.038252 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.038278 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.038297 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.141843 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.141884 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.141892 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.141906 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.141915 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.245211 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.245275 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.245300 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.245368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.245395 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.348911 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.349004 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.349039 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.349072 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.349099 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.451632 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.451709 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.451728 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.451755 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.451775 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.554838 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.554905 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.554921 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.554945 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.555004 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.658229 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.658290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.658307 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.658368 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.658394 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.761695 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.761745 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.761757 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.761775 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.761790 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.865182 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.865240 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.865256 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.865280 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.865299 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.965330 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:33 crc kubenswrapper[4812]: E1124 19:18:33.965551 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.968198 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.968290 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.968312 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.968387 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:33 crc kubenswrapper[4812]: I1124 19:18:33.968408 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:33Z","lastTransitionTime":"2025-11-24T19:18:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.071379 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.071489 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.071509 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.071537 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.071555 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.174413 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.174484 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.174506 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.174535 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.174556 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.277326 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.277449 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.277471 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.277501 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.277524 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.380528 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.380579 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.380602 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.380624 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.380641 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.483578 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.483653 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.483674 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.483706 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.483728 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.587023 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.587069 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.587085 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.587111 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.587136 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.689759 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.689817 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.689836 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.689859 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.689882 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.792784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.792842 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.792863 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.792888 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.792907 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.897016 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.897107 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.897130 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.897163 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.897188 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:34Z","lastTransitionTime":"2025-11-24T19:18:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.965653 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.965751 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:34 crc kubenswrapper[4812]: E1124 19:18:34.965847 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:34 crc kubenswrapper[4812]: I1124 19:18:34.965901 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:34 crc kubenswrapper[4812]: E1124 19:18:34.966148 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:34 crc kubenswrapper[4812]: E1124 19:18:34.966238 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.000257 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.000546 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.000730 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.000877 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.001017 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:35Z","lastTransitionTime":"2025-11-24T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.104395 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.104468 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.104487 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.104511 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.104528 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:35Z","lastTransitionTime":"2025-11-24T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.207647 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.207735 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.207786 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.207810 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.207827 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:35Z","lastTransitionTime":"2025-11-24T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.310710 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.310784 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.310808 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.310833 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.310852 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:35Z","lastTransitionTime":"2025-11-24T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.414439 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.414510 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.414533 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.414560 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.414580 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:35Z","lastTransitionTime":"2025-11-24T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.488059 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.488114 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.488131 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.488156 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.488176 4812 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T19:18:35Z","lastTransitionTime":"2025-11-24T19:18:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.559163 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"] Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.560327 4812 util.go:30] "No sandbox for pod can be found. 
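
The KubeletNotReady message repeated above names the blocker precisely: the runtime reports NetworkReady=false until a CNI config file appears in /etc/kubernetes/cni/net.d/. A minimal standalone sketch of that existence check, assuming the conventional .conf/.conflist/.json extensions (this is illustrative, not the kubelet's own probe code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path taken from the log message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	for _, e := range entries {
		// libcni conventionally loads .conf, .conflist, and .json files.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			return
		}
	}
	fmt.Println("no CNI configuration file in", dir, "- network provider not started?")
}

Once the network provider writes its config there, the Ready condition flips and the "Error syncing pod" retries for the four pending pods stop.
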
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.563832 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.563832 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.564787 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.566163 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.597517 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.597493424 podStartE2EDuration="55.597493424s" podCreationTimestamp="2025-11-24 19:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.597071053 +0000 UTC m=+109.386023464" watchObservedRunningTime="2025-11-24 19:18:35.597493424 +0000 UTC m=+109.386445845"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.597775 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.597767202 podStartE2EDuration="25.597767202s" podCreationTimestamp="2025-11-24 19:18:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.581845614 +0000 UTC m=+109.370798035" watchObservedRunningTime="2025-11-24 19:18:35.597767202 +0000 UTC m=+109.386719613"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.636150 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.636119327 podStartE2EDuration="1m25.636119327s" podCreationTimestamp="2025-11-24 19:17:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.617204127 +0000 UTC m=+109.406156538" watchObservedRunningTime="2025-11-24 19:18:35.636119327 +0000 UTC m=+109.425071738"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.689272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.689360 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.689407 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.689438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.689454 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.707621 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podStartSLOduration=84.707602824 podStartE2EDuration="1m24.707602824s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.707145951 +0000 UTC m=+109.496098372" watchObservedRunningTime="2025-11-24 19:18:35.707602824 +0000 UTC m=+109.496555195"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.717033 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-v9d5f" podStartSLOduration=84.717013713 podStartE2EDuration="1m24.717013713s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.716679594 +0000 UTC m=+109.505631965" watchObservedRunningTime="2025-11-24 19:18:35.717013713 +0000 UTC m=+109.505966084"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.763446 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.76342403 podStartE2EDuration="19.76342403s" podCreationTimestamp="2025-11-24 19:18:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.748411247 +0000 UTC m=+109.537363618" watchObservedRunningTime="2025-11-24 19:18:35.76342403 +0000 UTC m=+109.552376411"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.790475 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
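
The pod_startup_latency_tracker entries are internally consistent: with both pull timestamps at the zero value ("0001-01-01 ..."), podStartSLOduration equals podStartE2EDuration, which is watchObservedRunningTime minus podCreationTimestamp. For openshift-kube-scheduler-crc that is 19:18:35.597493424 minus 19:17:40, i.e. 55.597493424 s. A small sketch of the arithmetic (not the kubelet's implementation):

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartSLOduration arithmetic from the entries above:
// with no image pull recorded, SLO duration is simply
// watchObservedRunningTime - podCreationTimestamp.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-24 19:17:40 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-11-24 19:18:35.597493424 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(observed.Sub(created)) // 55.597493424s, matching podStartSLOduration
}
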
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.790565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.790617 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.790599 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.790667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.791176 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.791282 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.791706 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.798958 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.810831 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lhgj5" podStartSLOduration=84.810796453 podStartE2EDuration="1m24.810796453s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.80995652 +0000 UTC m=+109.598908901" watchObservedRunningTime="2025-11-24 19:18:35.810796453 +0000 UTC m=+109.599748834"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.815521 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-6f6rc\" (UID: \"250b1a9e-cc49-4205-8a1c-fc919d13ecf8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.827238 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zwlsb" podStartSLOduration=84.827218265 podStartE2EDuration="1m24.827218265s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.82632265 +0000 UTC m=+109.615275041" watchObservedRunningTime="2025-11-24 19:18:35.827218265 +0000 UTC m=+109.616170666"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.885037 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-qj8tt" podStartSLOduration=84.885019655 podStartE2EDuration="1m24.885019655s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.884066729 +0000 UTC m=+109.673019120" watchObservedRunningTime="2025-11-24 19:18:35.885019655 +0000 UTC m=+109.673972026"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.886831 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.928321 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sxfp4" podStartSLOduration=83.928265925 podStartE2EDuration="1m23.928265925s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.90992534 +0000 UTC m=+109.698877721" watchObservedRunningTime="2025-11-24 19:18:35.928265925 +0000 UTC m=+109.717218336"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.953005 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.952977055 podStartE2EDuration="1m28.952977055s" podCreationTimestamp="2025-11-24 19:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:35.951274858 +0000 UTC m=+109.740227279" watchObservedRunningTime="2025-11-24 19:18:35.952977055 +0000 UTC m=+109.741929456"
Nov 24 19:18:35 crc kubenswrapper[4812]: I1124 19:18:35.965682 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc"
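
The reconciler and operation_generator entries above walk one lifecycle per volume: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. The UniqueName in each entry follows a visible pattern: plugin path, then pod UID, then volume name. A sketch reproducing that naming for the five CVO volumes (the helper itself is hypothetical, for illustration only; the plugin paths, UID, and names are copied from the log):

package main

import "fmt"

// uniqueVolumeName is a hypothetical helper mirroring the
// "<plugin>/<pod UID>-<volume name>" convention seen in the entries above.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	podUID := "250b1a9e-cc49-4205-8a1c-fc919d13ecf8"
	for _, v := range []struct{ plugin, name string }{
		{"kubernetes.io/configmap", "service-ca"},
		{"kubernetes.io/host-path", "etc-cvo-updatepayloads"},
		{"kubernetes.io/host-path", "etc-ssl-certs"},
		{"kubernetes.io/secret", "serving-cert"},
		{"kubernetes.io/projected", "kube-api-access"},
	} {
		// e.g. kubernetes.io/configmap/250b1a9e-cc49-4205-8a1c-fc919d13ecf8-service-ca
		fmt.Println(uniqueVolumeName(v.plugin, podUID, v.name))
	}
}
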
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:35 crc kubenswrapper[4812]: E1124 19:18:35.966032 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:36 crc kubenswrapper[4812]: I1124 19:18:36.614617 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc" event={"ID":"250b1a9e-cc49-4205-8a1c-fc919d13ecf8","Type":"ContainerStarted","Data":"262d2fa718be8191df842c75be430513a99a6bda9b5a532fa7765f5501876e7e"} Nov 24 19:18:36 crc kubenswrapper[4812]: I1124 19:18:36.615476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc" event={"ID":"250b1a9e-cc49-4205-8a1c-fc919d13ecf8","Type":"ContainerStarted","Data":"bbd84b2bcc1c6a365c412744aad84afcf9d6dfa0de73196f9212b34c93911e1d"} Nov 24 19:18:36 crc kubenswrapper[4812]: I1124 19:18:36.637013 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-6f6rc" podStartSLOduration=85.636987264 podStartE2EDuration="1m25.636987264s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:36.636002257 +0000 UTC m=+110.424954708" watchObservedRunningTime="2025-11-24 19:18:36.636987264 +0000 UTC m=+110.425939685" Nov 24 19:18:36 crc kubenswrapper[4812]: I1124 19:18:36.965380 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:36 crc kubenswrapper[4812]: I1124 19:18:36.965496 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:36 crc kubenswrapper[4812]: I1124 19:18:36.965535 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:36 crc kubenswrapper[4812]: E1124 19:18:36.967561 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:36 crc kubenswrapper[4812]: E1124 19:18:36.967737 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:36 crc kubenswrapper[4812]: E1124 19:18:36.967878 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:37 crc kubenswrapper[4812]: I1124 19:18:37.964570 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:37 crc kubenswrapper[4812]: E1124 19:18:37.964755 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:38 crc kubenswrapper[4812]: I1124 19:18:38.968029 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:38 crc kubenswrapper[4812]: I1124 19:18:38.968147 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:38 crc kubenswrapper[4812]: I1124 19:18:38.968029 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:38 crc kubenswrapper[4812]: E1124 19:18:38.968227 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:38 crc kubenswrapper[4812]: E1124 19:18:38.968606 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:38 crc kubenswrapper[4812]: E1124 19:18:38.968711 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:39 crc kubenswrapper[4812]: I1124 19:18:39.964778 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:39 crc kubenswrapper[4812]: E1124 19:18:39.965292 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:39 crc kubenswrapper[4812]: I1124 19:18:39.965708 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:18:39 crc kubenswrapper[4812]: E1124 19:18:39.965925 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-dgm54_openshift-ovn-kubernetes(b24bf762-6020-46b4-b9e8-589eb8ed0650)\"" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" Nov 24 19:18:40 crc kubenswrapper[4812]: I1124 19:18:40.965611 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:40 crc kubenswrapper[4812]: I1124 19:18:40.965688 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:40 crc kubenswrapper[4812]: E1124 19:18:40.965772 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:40 crc kubenswrapper[4812]: I1124 19:18:40.965892 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:40 crc kubenswrapper[4812]: E1124 19:18:40.966033 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:40 crc kubenswrapper[4812]: E1124 19:18:40.966061 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:41 crc kubenswrapper[4812]: I1124 19:18:41.965058 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:41 crc kubenswrapper[4812]: E1124 19:18:41.965326 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:42 crc kubenswrapper[4812]: I1124 19:18:42.964997 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:42 crc kubenswrapper[4812]: I1124 19:18:42.965039 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:42 crc kubenswrapper[4812]: I1124 19:18:42.965146 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:42 crc kubenswrapper[4812]: E1124 19:18:42.966024 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:42 crc kubenswrapper[4812]: E1124 19:18:42.966510 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:42 crc kubenswrapper[4812]: E1124 19:18:42.966798 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:43 crc kubenswrapper[4812]: I1124 19:18:43.965412 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:43 crc kubenswrapper[4812]: E1124 19:18:43.965577 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:44 crc kubenswrapper[4812]: I1124 19:18:44.965302 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:44 crc kubenswrapper[4812]: I1124 19:18:44.965353 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:44 crc kubenswrapper[4812]: E1124 19:18:44.965965 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:44 crc kubenswrapper[4812]: I1124 19:18:44.965479 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:44 crc kubenswrapper[4812]: E1124 19:18:44.966418 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:44 crc kubenswrapper[4812]: E1124 19:18:44.966506 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:45 crc kubenswrapper[4812]: I1124 19:18:45.648991 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/1.log" Nov 24 19:18:45 crc kubenswrapper[4812]: I1124 19:18:45.649734 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/0.log" Nov 24 19:18:45 crc kubenswrapper[4812]: I1124 19:18:45.649793 4812 generic.go:334] "Generic (PLEG): container finished" podID="c270cb89-97c2-48c4-94c3-9b8420d81cfd" containerID="a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0" exitCode=1 Nov 24 19:18:45 crc kubenswrapper[4812]: I1124 19:18:45.649831 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerDied","Data":"a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0"} Nov 24 19:18:45 crc kubenswrapper[4812]: I1124 19:18:45.649875 4812 scope.go:117] "RemoveContainer" containerID="c5e234babac633ed6d3a267bc8b58bb9501331ad17e50ab995da1c473f172d73" Nov 24 19:18:45 crc kubenswrapper[4812]: I1124 19:18:45.650500 4812 scope.go:117] "RemoveContainer" containerID="a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0" Nov 24 19:18:45 crc kubenswrapper[4812]: E1124 19:18:45.650740 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lhgj5_openshift-multus(c270cb89-97c2-48c4-94c3-9b8420d81cfd)\"" pod="openshift-multus/multus-lhgj5" podUID="c270cb89-97c2-48c4-94c3-9b8420d81cfd" Nov 24 19:18:45 crc kubenswrapper[4812]: I1124 19:18:45.965673 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:45 crc kubenswrapper[4812]: E1124 19:18:45.965872 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:46 crc kubenswrapper[4812]: I1124 19:18:46.657001 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/1.log" Nov 24 19:18:46 crc kubenswrapper[4812]: E1124 19:18:46.907974 4812 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 19:18:46 crc kubenswrapper[4812]: I1124 19:18:46.964732 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:46 crc kubenswrapper[4812]: I1124 19:18:46.964853 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:46 crc kubenswrapper[4812]: I1124 19:18:46.964871 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:46 crc kubenswrapper[4812]: E1124 19:18:46.967086 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:46 crc kubenswrapper[4812]: E1124 19:18:46.967211 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:46 crc kubenswrapper[4812]: E1124 19:18:46.967413 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:47 crc kubenswrapper[4812]: E1124 19:18:47.087531 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 19:18:47 crc kubenswrapper[4812]: I1124 19:18:47.965207 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:47 crc kubenswrapper[4812]: E1124 19:18:47.965455 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:48 crc kubenswrapper[4812]: I1124 19:18:48.965095 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:48 crc kubenswrapper[4812]: I1124 19:18:48.965182 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:48 crc kubenswrapper[4812]: I1124 19:18:48.965131 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:48 crc kubenswrapper[4812]: E1124 19:18:48.965310 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:48 crc kubenswrapper[4812]: E1124 19:18:48.965522 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:48 crc kubenswrapper[4812]: E1124 19:18:48.965692 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:49 crc kubenswrapper[4812]: I1124 19:18:49.965255 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:49 crc kubenswrapper[4812]: E1124 19:18:49.965542 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:50 crc kubenswrapper[4812]: I1124 19:18:50.964936 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:50 crc kubenswrapper[4812]: I1124 19:18:50.965012 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:50 crc kubenswrapper[4812]: I1124 19:18:50.965008 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:50 crc kubenswrapper[4812]: E1124 19:18:50.965190 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:50 crc kubenswrapper[4812]: E1124 19:18:50.965285 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:50 crc kubenswrapper[4812]: E1124 19:18:50.965451 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:51 crc kubenswrapper[4812]: I1124 19:18:51.965142 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:51 crc kubenswrapper[4812]: E1124 19:18:51.965373 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:51 crc kubenswrapper[4812]: I1124 19:18:51.966493 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:18:52 crc kubenswrapper[4812]: E1124 19:18:52.088495 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.681626 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/3.log" Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.684424 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerStarted","Data":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.684908 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.830884 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podStartSLOduration=101.830862249 podStartE2EDuration="1m41.830862249s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:18:52.714390104 +0000 UTC m=+126.503342495" watchObservedRunningTime="2025-11-24 19:18:52.830862249 +0000 UTC m=+126.619814630" Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.831609 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jxmnc"] Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.831732 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:52 crc kubenswrapper[4812]: E1124 19:18:52.831838 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.965409 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:52 crc kubenswrapper[4812]: E1124 19:18:52.965586 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.965626 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:52 crc kubenswrapper[4812]: I1124 19:18:52.965690 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:52 crc kubenswrapper[4812]: E1124 19:18:52.965795 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:52 crc kubenswrapper[4812]: E1124 19:18:52.965931 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:54 crc kubenswrapper[4812]: I1124 19:18:54.964676 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:54 crc kubenswrapper[4812]: I1124 19:18:54.964781 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:54 crc kubenswrapper[4812]: I1124 19:18:54.964688 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:54 crc kubenswrapper[4812]: E1124 19:18:54.964913 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:54 crc kubenswrapper[4812]: I1124 19:18:54.964955 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:54 crc kubenswrapper[4812]: E1124 19:18:54.965067 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:54 crc kubenswrapper[4812]: E1124 19:18:54.965287 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:54 crc kubenswrapper[4812]: E1124 19:18:54.965515 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:56 crc kubenswrapper[4812]: I1124 19:18:56.965624 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:56 crc kubenswrapper[4812]: I1124 19:18:56.965680 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:56 crc kubenswrapper[4812]: I1124 19:18:56.965740 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:56 crc kubenswrapper[4812]: E1124 19:18:56.967817 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:56 crc kubenswrapper[4812]: I1124 19:18:56.967881 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:56 crc kubenswrapper[4812]: E1124 19:18:56.968024 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:56 crc kubenswrapper[4812]: E1124 19:18:56.968002 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:56 crc kubenswrapper[4812]: E1124 19:18:56.968222 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:18:57 crc kubenswrapper[4812]: E1124 19:18:57.090317 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 19:18:58 crc kubenswrapper[4812]: I1124 19:18:58.964957 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:18:58 crc kubenswrapper[4812]: I1124 19:18:58.965015 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:18:58 crc kubenswrapper[4812]: I1124 19:18:58.965062 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:18:58 crc kubenswrapper[4812]: E1124 19:18:58.965160 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:18:58 crc kubenswrapper[4812]: E1124 19:18:58.965270 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:18:58 crc kubenswrapper[4812]: I1124 19:18:58.965296 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:18:58 crc kubenswrapper[4812]: E1124 19:18:58.965391 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:18:58 crc kubenswrapper[4812]: E1124 19:18:58.965484 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:19:00 crc kubenswrapper[4812]: I1124 19:19:00.965509 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:19:00 crc kubenswrapper[4812]: E1124 19:19:00.965731 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:19:00 crc kubenswrapper[4812]: I1124 19:19:00.965770 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:00 crc kubenswrapper[4812]: I1124 19:19:00.965875 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:19:00 crc kubenswrapper[4812]: I1124 19:19:00.965906 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:19:00 crc kubenswrapper[4812]: E1124 19:19:00.966030 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:19:00 crc kubenswrapper[4812]: E1124 19:19:00.966216 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:19:00 crc kubenswrapper[4812]: E1124 19:19:00.966445 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:19:00 crc kubenswrapper[4812]: I1124 19:19:00.966904 4812 scope.go:117] "RemoveContainer" containerID="a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0" Nov 24 19:19:01 crc kubenswrapper[4812]: I1124 19:19:01.719661 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/1.log" Nov 24 19:19:01 crc kubenswrapper[4812]: I1124 19:19:01.719733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerStarted","Data":"3bb81f02ea22620bd0101c7d8fdcad7d10ed08c41d42d32cd204a571e256cf0f"} Nov 24 19:19:02 crc kubenswrapper[4812]: E1124 19:19:02.091947 4812 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 19:19:02 crc kubenswrapper[4812]: I1124 19:19:02.965625 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:02 crc kubenswrapper[4812]: I1124 19:19:02.965670 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:19:02 crc kubenswrapper[4812]: I1124 19:19:02.965693 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:19:02 crc kubenswrapper[4812]: I1124 19:19:02.965768 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:19:02 crc kubenswrapper[4812]: E1124 19:19:02.965981 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:19:02 crc kubenswrapper[4812]: E1124 19:19:02.966453 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:19:02 crc kubenswrapper[4812]: E1124 19:19:02.966692 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:19:02 crc kubenswrapper[4812]: E1124 19:19:02.966854 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:19:04 crc kubenswrapper[4812]: I1124 19:19:04.965146 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:04 crc kubenswrapper[4812]: I1124 19:19:04.965282 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:19:04 crc kubenswrapper[4812]: E1124 19:19:04.965385 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:19:04 crc kubenswrapper[4812]: I1124 19:19:04.965444 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:19:04 crc kubenswrapper[4812]: E1124 19:19:04.965503 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:19:04 crc kubenswrapper[4812]: I1124 19:19:04.965557 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:19:04 crc kubenswrapper[4812]: E1124 19:19:04.965755 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:19:04 crc kubenswrapper[4812]: E1124 19:19:04.965892 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:19:06 crc kubenswrapper[4812]: I1124 19:19:06.965434 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:06 crc kubenswrapper[4812]: I1124 19:19:06.965504 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:19:06 crc kubenswrapper[4812]: E1124 19:19:06.967915 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 19:19:06 crc kubenswrapper[4812]: I1124 19:19:06.967951 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:19:06 crc kubenswrapper[4812]: I1124 19:19:06.968252 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:19:06 crc kubenswrapper[4812]: E1124 19:19:06.968267 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 19:19:06 crc kubenswrapper[4812]: E1124 19:19:06.968379 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 19:19:06 crc kubenswrapper[4812]: E1124 19:19:06.968704 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jxmnc" podUID="2d681a94-8d4a-45cc-8559-dfd15b6d0b1e" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.965436 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.965450 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.965472 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.965535 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.970569 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.970621 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.972481 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.972577 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.972487 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 19:19:08 crc kubenswrapper[4812]: I1124 19:19:08.973441 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.775118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:14 crc kubenswrapper[4812]: E1124 19:19:14.775383 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:21:16.775296681 +0000 UTC m=+270.564249102 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.776766 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.778481 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.978942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.978983 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.979016 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.988086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.988598 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.990591 4812 util.go:30] "No sandbox for 
Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.990591 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:19:14 crc kubenswrapper[4812]: I1124 19:19:14.993752 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.008456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.027352 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 19:19:15 crc kubenswrapper[4812]: W1124 19:19:15.245826 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-ffae54ac942282cf67fdfd8bdba163ca29d8596b252fcfc8ea98aad5d3ccaec8 WatchSource:0}: Error finding container ffae54ac942282cf67fdfd8bdba163ca29d8596b252fcfc8ea98aad5d3ccaec8: Status 404 returned error can't find the container with id ffae54ac942282cf67fdfd8bdba163ca29d8596b252fcfc8ea98aad5d3ccaec8
Nov 24 19:19:15 crc kubenswrapper[4812]: W1124 19:19:15.271128 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c8930bf27a691f9198d478e45236eaf82d93b3268de4253770528a16f8daaa60 WatchSource:0}: Error finding container c8930bf27a691f9198d478e45236eaf82d93b3268de4253770528a16f8daaa60: Status 404 returned error can't find the container with id c8930bf27a691f9198d478e45236eaf82d93b3268de4253770528a16f8daaa60
Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.386524 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54"
Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.779083 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"182a78e227aa95d2c1f5b83f8fba44d347da1f72d400773985e7f7b048898fca"}
Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.779181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"75ab38ae8bd935c381883c8cc6f57cf0ecd463da5ddd11c81e0e64cdfc42a976"}
Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.779427 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.782761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"aff6b5d752e45d38b221a2931ecbbf9ff02976c644d254338c001bec647b9e3e"}
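The two manager.go:1169 warnings above are a benign startup race: cAdvisor receives the cgroup watch event for a freshly created cri-o container and tries to inspect it before the runtime can serve it, so it gets a 404. The same container IDs (ffae54ac…, c8930bf2…) show up moments later in successful PLEG ContainerStarted events, confirming nothing was lost. Reduced to the fields the log prints, a PLEG event carries a pod UID, an event type, and a container or sandbox ID; a small illustrative handler (the type and function names are mine, not kubelet's, and the values are copied from the network-check-target-xd92c events below):

    package main

    import "fmt"

    // Shape of the PLEG events printed at kubelet.go:2453, reduced to
    // the fields visible in the log. Names are illustrative.
    type podLifecycleEvent struct {
        ID   string // pod UID
        Type string // e.g. "ContainerStarted"
        Data string // container or sandbox ID, depending on the event
    }

    func handle(ev podLifecycleEvent) {
        switch ev.Type {
        case "ContainerStarted":
            // For a fresh pod the first ContainerStarted is typically the
            // sandbox; the workload containers follow.
            fmt.Printf("pod %s: container %s started\n", ev.ID, ev.Data)
        default:
            fmt.Printf("pod %s: unhandled event %s\n", ev.ID, ev.Type)
        }
    }

    func main() {
        handle(podLifecycleEvent{
            ID:   "3b6479f0-333b-4a96-9adf-2099afdc2447",
            Type: "ContainerStarted",
            Data: "75ab38ae8bd935c381883c8cc6f57cf0ecd463da5ddd11c81e0e64cdfc42a976",
        })
    }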
(PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ffae54ac942282cf67fdfd8bdba163ca29d8596b252fcfc8ea98aad5d3ccaec8"} Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.786487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6e92e78afd881759d30c6a9831ee4375f77669445b430fc7ab231274a3599ada"} Nov 24 19:19:15 crc kubenswrapper[4812]: I1124 19:19:15.786543 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c8930bf27a691f9198d478e45236eaf82d93b3268de4253770528a16f8daaa60"} Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.121289 4812 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.178781 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.179435 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.182906 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9lvg4"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.183663 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.185082 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.185901 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.189290 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jb4lb"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.190093 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.192738 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-krdpg"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.193668 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.194127 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6zn22"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.194744 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.197868 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.198799 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.201879 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.202455 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.202695 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.202871 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.202909 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.203506 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xtdvk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.204030 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.206999 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.207457 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.207871 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.208516 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.209034 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4spkn"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.210225 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4spkn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.220294 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.221598 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.223536 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.229020 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd6qw"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.230078 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.254995 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-22pnq"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.255618 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.255828 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kqn98"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.257375 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.257648 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.257834 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.258008 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.258193 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.258535 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.260684 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.260857 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqzwn"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.261379 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.261555 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.280619 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.281322 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.281710 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285027 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285069 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285393 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285601 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285656 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285693 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285766 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285844 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285904 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.286044 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.286249 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.286459 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.286663 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.286972 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287116 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287216 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287249 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287345 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287466 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287636 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287749 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287690 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287793 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287845 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287907 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288002 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288026 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287223 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.286461 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288267 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287518 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288413 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285849 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.286973 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 
19:19:16.287361 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287369 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289528 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.287761 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288519 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288592 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288602 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288657 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288728 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.285036 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289729 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288764 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288789 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288855 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288929 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.288972 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289059 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289062 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289108 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 
19:19:16.289109 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289151 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289196 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289208 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289243 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289327 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289347 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289361 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.289265 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290436 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290460 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290479 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d7kg\" (UniqueName: \"kubernetes.io/projected/74f627e0-8a46-4876-b456-9efda3c4ad41-kube-api-access-2d7kg\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290498 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-serving-cert\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290513 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xdhkd\" (UniqueName: \"kubernetes.io/projected/16e4032b-e804-4cf0-9a9f-0c23319c06df-kube-api-access-xdhkd\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290528 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-service-ca\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74f627e0-8a46-4876-b456-9efda3c4ad41-audit-dir\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290563 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-etcd-client\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290580 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bprtz\" (UniqueName: \"kubernetes.io/projected/7dab4563-6b36-4d8e-a789-58459d65cb7c-kube-api-access-bprtz\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16e4032b-e804-4cf0-9a9f-0c23319c06df-node-pullsecrets\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290613 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-encryption-config\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290620 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290632 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-etcd-client\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290656 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dab4563-6b36-4d8e-a789-58459d65cb7c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290678 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290693 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290713 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-config\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290734 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290749 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dab4563-6b36-4d8e-a789-58459d65cb7c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290764 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msmmt\" (UniqueName: \"kubernetes.io/projected/afeebfe0-572d-4c6e-b706-2a35b40e25e0-kube-api-access-msmmt\") pod \"cluster-samples-operator-665b6dd947-7ndrw\" (UID: \"afeebfe0-572d-4c6e-b706-2a35b40e25e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290781 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdsn5\" (UniqueName: \"kubernetes.io/projected/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-kube-api-access-zdsn5\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc 
kubenswrapper[4812]: I1124 19:19:16.290797 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290814 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e4032b-e804-4cf0-9a9f-0c23319c06df-audit-dir\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290830 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-serving-cert\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290845 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-audit-policies\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290863 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290880 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290895 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290912 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngt69\" (UniqueName: \"kubernetes.io/projected/1b540550-77e7-4545-9e34-b972ab5ec677-kube-api-access-ngt69\") pod \"downloads-7954f5f757-4spkn\" (UID: \"1b540550-77e7-4545-9e34-b972ab5ec677\") " pod="openshift-console/downloads-7954f5f757-4spkn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290938 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-encryption-config\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290973 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e28dbd25-fd09-440e-b2d8-4560d43ea079-serving-cert\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.290992 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-audit-policies\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291007 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfbhn\" (UniqueName: \"kubernetes.io/projected/03ad7961-295b-4e12-82b0-b75f196049b0-kube-api-access-sfbhn\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-config\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-client\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291087 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-serving-cert\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291122 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03ad7961-295b-4e12-82b0-b75f196049b0-audit-dir\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291176 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291242 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291366 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-ca\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291406 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291436 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/afeebfe0-572d-4c6e-b706-2a35b40e25e0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7ndrw\" (UID: \"afeebfe0-572d-4c6e-b706-2a35b40e25e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291463 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-audit\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 
19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291493 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-etcd-serving-ca\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291513 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291532 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291564 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j46xv\" (UniqueName: \"kubernetes.io/projected/e28dbd25-fd09-440e-b2d8-4560d43ea079-kube-api-access-j46xv\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-config\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291618 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291639 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291623 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.291642 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-image-import-ca\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.292294 4812 util.go:30] "No sandbox for pod can be found. 
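The long run of reconciler_common.go:245 entries above is the volume manager populating its desired state of world for the pods just added: one VerifyControllerAttachedVolume operation per volume (this kubelet delegates attach/detach to the controller manager, so mounts begin by verifying the controller's attachment record), with MountVolume operations following once verification passes, as seen below at 19:19:16.395…. Underneath it is a plain desired-versus-actual diff; a toy version of that pattern (function and variable names are illustrative, not kubelet's):

    package main

    import "fmt"

    // Toy version of the volume manager's reconcile step: diff the desired
    // state of world (volumes required by pods assigned here) against the
    // actual state (volumes currently mounted) and emit operations.
    func reconcile(desired, actual map[string]bool) []string {
        var ops []string
        for v := range desired {
            if !actual[v] {
                ops = append(ops, "VerifyControllerAttachedVolume+MountVolume "+v)
            }
        }
        for v := range actual {
            if !desired[v] {
                ops = append(ops, "UnmountVolume "+v)
            }
        }
        return ops
    }

    func main() {
        desired := map[string]bool{"serving-cert": true, "audit-dir": true}
        // Stale mount left over from a deleted pod, like the PVC earlier.
        actual := map[string]bool{"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true}
        for _, op := range reconcile(desired, actual) {
            fmt.Println(op)
        }
    }

The stale pvc-657094db… mount from the earlier UnmountVolume.TearDown error is exactly the "in actual but not desired" branch of this diff.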
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.294685 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.296035 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qrrxs"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.296763 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.297071 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.299284 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.299648 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.299777 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.299825 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.299788 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.300597 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.318059 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.318832 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.321660 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.325228 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.325839 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.326261 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.326734 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.326983 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.333985 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.334525 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.335097 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.336170 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.336550 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.337431 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xv79f"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.341318 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.342318 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.344215 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.345695 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.347834 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.356805 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.358856 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.359263 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.359287 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6nn48"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.356924 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.359665 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.359738 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.360309 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f6574"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.361325 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.361401 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.361730 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.362001 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.362083 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.362539 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.362917 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.363350 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nvpxb"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.363895 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.363967 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.364131 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.364597 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.364799 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.365228 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d2q8r"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.365317 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.365681 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.365703 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.366594 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.367013 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.367296 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.367849 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.370749 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.371413 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.374200 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jb4lb"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.378257 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7dndk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.379935 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.380035 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.380570 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qjq4v"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.381875 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.393822 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395152 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngt69\" (UniqueName: \"kubernetes.io/projected/1b540550-77e7-4545-9e34-b972ab5ec677-kube-api-access-ngt69\") pod \"downloads-7954f5f757-4spkn\" (UID: \"1b540550-77e7-4545-9e34-b972ab5ec677\") " pod="openshift-console/downloads-7954f5f757-4spkn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395297 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-encryption-config\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395382 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e28dbd25-fd09-440e-b2d8-4560d43ea079-serving-cert\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395415 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-audit-policies\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395435 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfbhn\" (UniqueName: \"kubernetes.io/projected/03ad7961-295b-4e12-82b0-b75f196049b0-kube-api-access-sfbhn\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395467 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-config\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395489 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-client\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395507 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc 
kubenswrapper[4812]: I1124 19:19:16.395532 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395557 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-serving-cert\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395665 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03ad7961-295b-4e12-82b0-b75f196049b0-audit-dir\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395686 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395725 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-ca\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395776 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395800 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/afeebfe0-572d-4c6e-b706-2a35b40e25e0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7ndrw\" (UID: \"afeebfe0-572d-4c6e-b706-2a35b40e25e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395816 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-audit\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395836 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-etcd-serving-ca\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395854 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j46xv\" (UniqueName: \"kubernetes.io/projected/e28dbd25-fd09-440e-b2d8-4560d43ea079-kube-api-access-j46xv\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395893 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-config\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395907 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395925 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-image-import-ca\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395946 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: 
\"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.395990 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d7kg\" (UniqueName: \"kubernetes.io/projected/74f627e0-8a46-4876-b456-9efda3c4ad41-kube-api-access-2d7kg\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396136 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-serving-cert\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396157 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdhkd\" (UniqueName: \"kubernetes.io/projected/16e4032b-e804-4cf0-9a9f-0c23319c06df-kube-api-access-xdhkd\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396179 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-service-ca\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396203 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74f627e0-8a46-4876-b456-9efda3c4ad41-audit-dir\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396223 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-etcd-client\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396244 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bprtz\" (UniqueName: \"kubernetes.io/projected/7dab4563-6b36-4d8e-a789-58459d65cb7c-kube-api-access-bprtz\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396299 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396470 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16e4032b-e804-4cf0-9a9f-0c23319c06df-node-pullsecrets\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 
crc kubenswrapper[4812]: I1124 19:19:16.399227 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9lvg4"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.399551 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.399640 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.396296 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/16e4032b-e804-4cf0-9a9f-0c23319c06df-node-pullsecrets\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.400058 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-encryption-config\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.400109 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-etcd-client\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.400511 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-service-ca\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.400791 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-config\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.401182 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-service-ca-bundle\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.401276 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-etcd-serving-ca\") pod \"apiserver-76f77b778f-xtdvk\" (UID: 
\"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.402787 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-image-import-ca\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.400632 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dab4563-6b36-4d8e-a789-58459d65cb7c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.403889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.404431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74f627e0-8a46-4876-b456-9efda3c4ad41-audit-dir\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.404624 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-config\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.404904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-audit-policies\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.405173 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-config\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.409372 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-client\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.409863 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.410248 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03ad7961-295b-4e12-82b0-b75f196049b0-audit-dir\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.410802 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-audit\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.411764 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.413246 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.413672 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e28dbd25-fd09-440e-b2d8-4560d43ea079-etcd-ca\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.413763 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414221 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414332 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414629 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dab4563-6b36-4d8e-a789-58459d65cb7c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414724 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msmmt\" (UniqueName: \"kubernetes.io/projected/afeebfe0-572d-4c6e-b706-2a35b40e25e0-kube-api-access-msmmt\") pod \"cluster-samples-operator-665b6dd947-7ndrw\" (UID: \"afeebfe0-572d-4c6e-b706-2a35b40e25e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414773 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdsn5\" (UniqueName: \"kubernetes.io/projected/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-kube-api-access-zdsn5\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414800 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e4032b-e804-4cf0-9a9f-0c23319c06df-audit-dir\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.414800 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-krdpg"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.415670 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.415731 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-serving-cert\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.415770 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-audit-policies\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: 
\"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.417352 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.417707 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.417783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.418199 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-config\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.421561 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.421738 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7dab4563-6b36-4d8e-a789-58459d65cb7c-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.422036 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-etcd-client\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.422325 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-audit-policies\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.422364 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16e4032b-e804-4cf0-9a9f-0c23319c06df-audit-dir\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.422928 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-serving-cert\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: 
\"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.423143 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7dab4563-6b36-4d8e-a789-58459d65cb7c-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.423222 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.423696 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.423909 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e28dbd25-fd09-440e-b2d8-4560d43ea079-serving-cert\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.423959 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-22pnq"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.424004 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e4032b-e804-4cf0-9a9f-0c23319c06df-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.424402 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-serving-cert\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.424650 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/74f627e0-8a46-4876-b456-9efda3c4ad41-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.424668 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-encryption-config\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.424932 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.424938 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.425041 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-serving-cert\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.425132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.425206 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.425254 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/74f627e0-8a46-4876-b456-9efda3c4ad41-etcd-client\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.425632 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.425811 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/16e4032b-e804-4cf0-9a9f-0c23319c06df-encryption-config\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.425901 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kqn98"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.426319 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.426905 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.427602 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4spkn"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.428253 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/afeebfe0-572d-4c6e-b706-2a35b40e25e0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-7ndrw\" (UID: \"afeebfe0-572d-4c6e-b706-2a35b40e25e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.429394 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.431184 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xtdvk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.432356 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f6574"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.434130 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6nn48"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.435772 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.436932 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6zn22"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.438421 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.439868 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.441015 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nvpxb"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.442170 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.443142 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.444391 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-d2q8r"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.446209 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.446240 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.449988 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xv79f"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.450020 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd6qw"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.450029 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.452461 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.452522 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.454403 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqzwn"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.454485 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.456555 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7dndk"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.458111 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.459058 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qjq4v"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.460166 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.461430 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.462985 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.463174 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.464493 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-nzz22"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.465367 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.465504 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pgq95"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.466279 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.466911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.467973 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nzz22"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.469258 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pgq95"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.470962 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-cs84n"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.471455 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ccmv2"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.471878 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.472070 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.473100 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ccmv2"] Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.484253 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.504382 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.523996 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.543978 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.563981 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.583996 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.603153 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.625055 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.643928 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.665019 4812 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.683675 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.703822 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.742171 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.744794 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.764863 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.824412 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.841789 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zgl\" (UniqueName: \"kubernetes.io/projected/2ce9900d-791b-48db-ad36-1fcdc3200dbf-kube-api-access-62zgl\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.841845 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f120455-5d97-4a64-a279-3992aeba7663-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.841871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-bound-sa-token\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.841894 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.841953 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9900d-791b-48db-ad36-1fcdc3200dbf-config\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " 
pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.841989 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgq9m\" (UniqueName: \"kubernetes.io/projected/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-kube-api-access-mgq9m\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.842026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-service-ca\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.842088 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-serving-cert\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.842163 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.842186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f120455-5d97-4a64-a279-3992aeba7663-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.842211 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfsnq\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-kube-api-access-zfsnq\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.842917 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-console-config\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.842948 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-oauth-serving-cert\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: 
I1124 19:19:16.842983 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce9900d-791b-48db-ad36-1fcdc3200dbf-serving-cert\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843100 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldxd\" (UniqueName: \"kubernetes.io/projected/6432557f-df32-40cb-a0b5-6ee2c652b10f-kube-api-access-7ldxd\") pod \"dns-operator-744455d44c-kqn98\" (UID: \"6432557f-df32-40cb-a0b5-6ee2c652b10f\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843321 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-config\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843505 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843512 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtswf\" (UniqueName: \"kubernetes.io/projected/68125ac7-cf1b-4461-820a-b7318076e62d-kube-api-access-dtswf\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843601 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-certificates\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843658 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6432557f-df32-40cb-a0b5-6ee2c652b10f-metrics-tls\") pod \"dns-operator-744455d44c-kqn98\" (UID: \"6432557f-df32-40cb-a0b5-6ee2c652b10f\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843810 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-trusted-ca-bundle\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843902 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6d9\" (UniqueName: \"kubernetes.io/projected/6f120455-5d97-4a64-a279-3992aeba7663-kube-api-access-vx6d9\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843949 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-config\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843981 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9392d98-21d0-4761-a4e4-0a75367b4c31-serving-cert\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.843998 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ce9900d-791b-48db-ad36-1fcdc3200dbf-trusted-ca\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-client-ca\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844058 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844073 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-serving-cert\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844090 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssqm\" (UniqueName: \"kubernetes.io/projected/a9392d98-21d0-4761-a4e4-0a75367b4c31-kube-api-access-jssqm\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844128 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-trusted-ca\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: 
\"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-client-ca\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844160 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-oauth-config\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844190 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.844207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-tls\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: E1124 19:19:16.844915 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.344898997 +0000 UTC m=+151.133851368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.863736 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.901602 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.904662 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.924618 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.943912 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.945214 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:16 crc kubenswrapper[4812]: E1124 19:19:16.945377 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.445358156 +0000 UTC m=+151.234310527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.945826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.946014 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9900d-791b-48db-ad36-1fcdc3200dbf-config\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.946157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmrs\" (UniqueName: \"kubernetes.io/projected/a2fd27c6-f3db-49bd-b347-b00275c78e6e-kube-api-access-jjmrs\") pod \"package-server-manager-789f6589d5-9hgkr\" (UID: \"a2fd27c6-f3db-49bd-b347-b00275c78e6e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.946325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmlsh\" (UniqueName: \"kubernetes.io/projected/d4a06222-8c3c-46c0-a29c-acf87a56c0db-kube-api-access-tmlsh\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.946496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8777c17-ca96-4124-9420-0d41d4d458a9-webhook-cert\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.946712 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-machine-approver-tls\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.946885 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60c29281-e44c-44f2-9981-e76e97baf4a8-config\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 
19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.946989 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ce9900d-791b-48db-ad36-1fcdc3200dbf-config\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.947148 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-serving-cert\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.947317 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md485\" (UniqueName: \"kubernetes.io/projected/29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8-kube-api-access-md485\") pod \"migrator-59844c95c7-bnzn9\" (UID: \"29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.947582 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.947717 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9h28\" (UniqueName: \"kubernetes.io/projected/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-kube-api-access-v9h28\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.947861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-cert\") pod \"ingress-canary-ccmv2\" (UID: \"e972f42f-2bbb-4c1a-83a8-56b3fab82b60\") " pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.947997 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948016 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948175 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/139070ff-0113-4e25-93ef-d1861e0c4318-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948302 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfsnq\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-kube-api-access-zfsnq\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948456 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-console-config\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948586 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726e8147-8d74-4797-8989-fdb12e97bd01-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c35074ed-4b34-451a-aea5-9022f6f8c685-images\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948867 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce9900d-791b-48db-ad36-1fcdc3200dbf-serving-cert\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.948995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldxd\" (UniqueName: \"kubernetes.io/projected/6432557f-df32-40cb-a0b5-6ee2c652b10f-kube-api-access-7ldxd\") pod \"dns-operator-744455d44c-kqn98\" (UID: \"6432557f-df32-40cb-a0b5-6ee2c652b10f\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.949142 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-csi-data-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.949275 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0787504c-4f5c-4089-8f03-679eb74d7b50-config\") pod \"service-ca-operator-777779d784-7dndk\" (UID: 
\"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.949503 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a153dfd1-0dbb-4913-bcf4-768496d5db9a-serving-cert\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.949675 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/793a0705-3d09-4760-aa80-481eddf0ad45-trusted-ca\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.949827 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35074ed-4b34-451a-aea5-9022f6f8c685-config\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.949374 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-console-config\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.950026 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-config\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.950279 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrxj\" (UniqueName: \"kubernetes.io/projected/8d30a2dd-d2ba-463e-8d95-6f47271bac81-kube-api-access-lzrxj\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.950441 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crzxl\" (UniqueName: \"kubernetes.io/projected/0787504c-4f5c-4089-8f03-679eb74d7b50-kube-api-access-crzxl\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.950606 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb6l6\" (UniqueName: \"kubernetes.io/projected/04267c57-8e76-48a4-b768-ce571525ed62-kube-api-access-xb6l6\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.950744 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jdqd\" (UniqueName: \"kubernetes.io/projected/9bb2abbd-0aca-4038-91a7-4fff299bfb45-kube-api-access-4jdqd\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.950889 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nmn6\" (UniqueName: \"kubernetes.io/projected/fb5635f7-027d-4411-ad86-2e27d7e74efb-kube-api-access-4nmn6\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.951038 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-png85\" (UniqueName: \"kubernetes.io/projected/dd6fed1e-59d9-400c-b9cd-561ec215a48e-kube-api-access-png85\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.951195 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60c29281-e44c-44f2-9981-e76e97baf4a8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.951375 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d003f73-5e4b-475f-bb57-66f13916a0c5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qr7zk\" (UID: \"0d003f73-5e4b-475f-bb57-66f13916a0c5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.951562 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/793a0705-3d09-4760-aa80-481eddf0ad45-metrics-tls\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.951725 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139070ff-0113-4e25-93ef-d1861e0c4318-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.951871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a153dfd1-0dbb-4913-bcf4-768496d5db9a-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.952035 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2bmh\" (UniqueName: \"kubernetes.io/projected/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-kube-api-access-t2bmh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.950314 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-serving-cert\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.952360 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-socket-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.952509 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-stats-auth\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.952680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.952813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-serving-cert\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.952942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jssqm\" (UniqueName: \"kubernetes.io/projected/a9392d98-21d0-4761-a4e4-0a75367b4c31-kube-api-access-jssqm\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.953109 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7b4\" (UniqueName: \"kubernetes.io/projected/a153dfd1-0dbb-4913-bcf4-768496d5db9a-kube-api-access-hh7b4\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: 
\"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.953289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb5635f7-027d-4411-ad86-2e27d7e74efb-signing-cabundle\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.953488 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d30a2dd-d2ba-463e-8d95-6f47271bac81-service-ca-bundle\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.953625 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85cc9fad-228f-4d80-8348-8743b79cf691-proxy-tls\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.953816 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.953962 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-tls\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.954101 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-oauth-config\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.954305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4a06222-8c3c-46c0-a29c-acf87a56c0db-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.954480 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0787504c-4f5c-4089-8f03-679eb74d7b50-serving-cert\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.954619 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.954755 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd9bc\" (UniqueName: \"kubernetes.io/projected/e8777c17-ca96-4124-9420-0d41d4d458a9-kube-api-access-fd9bc\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.954914 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-node-bootstrap-token\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.955074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f120455-5d97-4a64-a279-3992aeba7663-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.955202 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.955363 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/139070ff-0113-4e25-93ef-d1861e0c4318-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.955505 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb2abbd-0aca-4038-91a7-4fff299bfb45-config-volume\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.955628 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726e8147-8d74-4797-8989-fdb12e97bd01-config\") pod 
\"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.955768 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/726e8147-8d74-4797-8989-fdb12e97bd01-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:16 crc kubenswrapper[4812]: E1124 19:19:16.958309 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.458292506 +0000 UTC m=+151.247244887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958779 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chp89\" (UniqueName: \"kubernetes.io/projected/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-kube-api-access-chp89\") pod \"ingress-canary-ccmv2\" (UID: \"e972f42f-2bbb-4c1a-83a8-56b3fab82b60\") " pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958814 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn6kv\" (UniqueName: \"kubernetes.io/projected/85cc9fad-228f-4d80-8348-8743b79cf691-kube-api-access-gn6kv\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958859 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c4fc3d9-eec6-483b-94ac-626032f65ff6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958879 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85cc9fad-228f-4d80-8348-8743b79cf691-images\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958900 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9bb2abbd-0aca-4038-91a7-4fff299bfb45-metrics-tls\") pod 
\"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958920 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/793a0705-3d09-4760-aa80-481eddf0ad45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958943 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgq9m\" (UniqueName: \"kubernetes.io/projected/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-kube-api-access-mgq9m\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958948 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ce9900d-791b-48db-ad36-1fcdc3200dbf-serving-cert\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.958963 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-service-ca\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.959032 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f120455-5d97-4a64-a279-3992aeba7663-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.959043 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.959085 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c4fc3d9-eec6-483b-94ac-626032f65ff6-srv-cert\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.959702 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsvmq\" (UniqueName: \"kubernetes.io/projected/2c4fc3d9-eec6-483b-94ac-626032f65ff6-kube-api-access-tsvmq\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:16 
crc kubenswrapper[4812]: I1124 19:19:16.959737 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f95538f5-bbe6-43f4-a40c-0a60b3a2b828-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d2q8r\" (UID: \"f95538f5-bbe6-43f4-a40c-0a60b3a2b828\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.959764 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f120455-5d97-4a64-a279-3992aeba7663-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmsp\" (UniqueName: \"kubernetes.io/projected/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-kube-api-access-mfmsp\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960174 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-oauth-serving-cert\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960306 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-mountpoint-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960396 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60c29281-e44c-44f2-9981-e76e97baf4a8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960489 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a685cc-973c-4317-b6fe-5bc842e32099-config-volume\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960535 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgjj5\" (UniqueName: \"kubernetes.io/projected/c35074ed-4b34-451a-aea5-9022f6f8c685-kube-api-access-vgjj5\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 
19:19:16.960571 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-registration-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-plugins-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960621 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-metrics-certs\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960662 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtswf\" (UniqueName: \"kubernetes.io/projected/68125ac7-cf1b-4461-820a-b7318076e62d-kube-api-access-dtswf\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-config\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.960907 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-oauth-serving-cert\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961003 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vltcl\" (UniqueName: \"kubernetes.io/projected/0d003f73-5e4b-475f-bb57-66f13916a0c5-kube-api-access-vltcl\") pod \"control-plane-machine-set-operator-78cbb6b69f-qr7zk\" (UID: \"0d003f73-5e4b-475f-bb57-66f13916a0c5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961045 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4a06222-8c3c-46c0-a29c-acf87a56c0db-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961081 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961109 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7787q\" (UniqueName: \"kubernetes.io/projected/793a0705-3d09-4760-aa80-481eddf0ad45-kube-api-access-7787q\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961147 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-certificates\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6432557f-df32-40cb-a0b5-6ee2c652b10f-metrics-tls\") pod \"dns-operator-744455d44c-kqn98\" (UID: \"6432557f-df32-40cb-a0b5-6ee2c652b10f\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961183 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-service-ca\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961196 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-trusted-ca-bundle\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961230 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx6d9\" (UniqueName: \"kubernetes.io/projected/6f120455-5d97-4a64-a279-3992aeba7663-kube-api-access-vx6d9\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961656 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnbtz\" (UniqueName: \"kubernetes.io/projected/89a685cc-973c-4317-b6fe-5bc842e32099-kube-api-access-dnbtz\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961702 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85cc9fad-228f-4d80-8348-8743b79cf691-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961857 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8777c17-ca96-4124-9420-0d41d4d458a9-apiservice-cert\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961862 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-config\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961918 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2fd27c6-f3db-49bd-b347-b00275c78e6e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9hgkr\" (UID: \"a2fd27c6-f3db-49bd-b347-b00275c78e6e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.961976 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-config\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962005 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4a06222-8c3c-46c0-a29c-acf87a56c0db-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962031 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e8777c17-ca96-4124-9420-0d41d4d458a9-tmpfs\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962051 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8xc\" (UniqueName: \"kubernetes.io/projected/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-kube-api-access-mh8xc\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962069 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-srv-cert\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: 
\"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9392d98-21d0-4761-a4e4-0a75367b4c31-serving-cert\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962115 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ce9900d-791b-48db-ad36-1fcdc3200dbf-trusted-ca\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962139 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c35074ed-4b34-451a-aea5-9022f6f8c685-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962167 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dd6fed1e-59d9-400c-b9cd-561ec215a48e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-client-ca\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-trusted-ca-bundle\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962273 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q5mh\" (UniqueName: \"kubernetes.io/projected/47cb9fd2-9706-464a-9360-901e3da7aa86-kube-api-access-2q5mh\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-client-ca\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 
19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962361 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-auth-proxy-config\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962384 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-trusted-ca\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962416 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb5635f7-027d-4411-ad86-2e27d7e74efb-signing-key\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962437 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dd6fed1e-59d9-400c-b9cd-561ec215a48e-proxy-tls\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962493 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62zgl\" (UniqueName: \"kubernetes.io/projected/2ce9900d-791b-48db-ad36-1fcdc3200dbf-kube-api-access-62zgl\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962518 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a685cc-973c-4317-b6fe-5bc842e32099-secret-volume\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962572 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-certs\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962595 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-default-certificate\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962620 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962688 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-bound-sa-token\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.962723 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pb4k\" (UniqueName: \"kubernetes.io/projected/f95538f5-bbe6-43f4-a40c-0a60b3a2b828-kube-api-access-6pb4k\") pod \"multus-admission-controller-857f4d67dd-d2q8r\" (UID: \"f95538f5-bbe6-43f4-a40c-0a60b3a2b828\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.963080 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-client-ca\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.963320 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2ce9900d-791b-48db-ad36-1fcdc3200dbf-trusted-ca\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.963473 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.963733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-config\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.963938 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-client-ca\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.964245 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-certificates\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.964691 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-trusted-ca\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.967073 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-tls\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.967084 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-serving-cert\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.967120 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6432557f-df32-40cb-a0b5-6ee2c652b10f-metrics-tls\") pod \"dns-operator-744455d44c-kqn98\" (UID: \"6432557f-df32-40cb-a0b5-6ee2c652b10f\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.967619 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.967625 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-oauth-config\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.967665 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f120455-5d97-4a64-a279-3992aeba7663-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.968014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9392d98-21d0-4761-a4e4-0a75367b4c31-serving-cert\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:16 crc kubenswrapper[4812]: I1124 19:19:16.983090 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.003206 4812 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.023638 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.043811 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.064255 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.067755 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068016 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-auth-proxy-config\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068059 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb5635f7-027d-4411-ad86-2e27d7e74efb-signing-key\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068085 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dd6fed1e-59d9-400c-b9cd-561ec215a48e-proxy-tls\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a685cc-973c-4317-b6fe-5bc842e32099-secret-volume\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.068364 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.568312361 +0000 UTC m=+151.357264732 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-certs\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068601 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-default-certificate\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068698 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.068923 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pb4k\" (UniqueName: \"kubernetes.io/projected/f95538f5-bbe6-43f4-a40c-0a60b3a2b828-kube-api-access-6pb4k\") pod \"multus-admission-controller-857f4d67dd-d2q8r\" (UID: \"f95538f5-bbe6-43f4-a40c-0a60b3a2b828\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069044 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjmrs\" (UniqueName: \"kubernetes.io/projected/a2fd27c6-f3db-49bd-b347-b00275c78e6e-kube-api-access-jjmrs\") pod \"package-server-manager-789f6589d5-9hgkr\" (UID: \"a2fd27c6-f3db-49bd-b347-b00275c78e6e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069145 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmlsh\" (UniqueName: \"kubernetes.io/projected/d4a06222-8c3c-46c0-a29c-acf87a56c0db-kube-api-access-tmlsh\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069114 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-auth-proxy-config\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069243 4812 reconciler_common.go:218] 
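[Editor's note] The E1124 entry above is the first real failure in this section: kubelet tries to tear down the hostpath-provisioner PVC left over from before the restart, but CSI drivers register with kubelet at runtime over the plugin-registration socket, and kubevirt.io.hostpath-provisioner has not re-registered yet — the csi-hostpathplugin-pgq95 pod that provides it is itself still having its registration-dir/socket-dir volumes mounted in the surrounding entries. A minimal Go sketch of the kind of registry lookup that produces this message; driverRegistry and clientFor are invented names for illustration, not kubelet's actual code:

    package main

    import "fmt"

    // driverRegistry stands in for kubelet's in-memory table of CSI drivers,
    // which plugins populate at runtime via the plugin-registration socket.
    type driverRegistry map[string]string // driver name -> plugin endpoint

    // clientFor is a hypothetical helper: any lookup for a driver that has
    // not yet registered fails with the message seen in the journal.
    func (r driverRegistry) clientFor(name string) (string, error) {
    	endpoint, ok := r[name]
    	if !ok {
    		return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
    	}
    	return endpoint, nil
    }

    func main() {
    	reg := driverRegistry{} // empty right after a kubelet restart
    	if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
    		fmt.Println("UnmountVolume.TearDown failed:", err)
    	}
    }

Once the plugin pod comes up and registers, the same lookup succeeds and the queued operation is retried, so the failure is transient by design.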
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8777c17-ca96-4124-9420-0d41d4d458a9-webhook-cert\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-machine-approver-tls\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069524 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60c29281-e44c-44f2-9981-e76e97baf4a8-config\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md485\" (UniqueName: \"kubernetes.io/projected/29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8-kube-api-access-md485\") pod \"migrator-59844c95c7-bnzn9\" (UID: \"29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069732 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9h28\" (UniqueName: \"kubernetes.io/projected/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-kube-api-access-v9h28\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069828 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-cert\") pod \"ingress-canary-ccmv2\" (UID: \"e972f42f-2bbb-4c1a-83a8-56b3fab82b60\") " pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.069955 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/139070ff-0113-4e25-93ef-d1861e0c4318-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070051 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c35074ed-4b34-451a-aea5-9022f6f8c685-images\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070143 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726e8147-8d74-4797-8989-fdb12e97bd01-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070213 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0787504c-4f5c-4089-8f03-679eb74d7b50-config\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a153dfd1-0dbb-4913-bcf4-768496d5db9a-serving-cert\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070387 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-csi-data-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070493 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/793a0705-3d09-4760-aa80-481eddf0ad45-trusted-ca\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070564 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35074ed-4b34-451a-aea5-9022f6f8c685-config\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-config\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrxj\" (UniqueName: \"kubernetes.io/projected/8d30a2dd-d2ba-463e-8d95-6f47271bac81-kube-api-access-lzrxj\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.070813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crzxl\" (UniqueName: \"kubernetes.io/projected/0787504c-4f5c-4089-8f03-679eb74d7b50-kube-api-access-crzxl\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 
19:19:17.070915 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb6l6\" (UniqueName: \"kubernetes.io/projected/04267c57-8e76-48a4-b768-ce571525ed62-kube-api-access-xb6l6\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.071004 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jdqd\" (UniqueName: \"kubernetes.io/projected/9bb2abbd-0aca-4038-91a7-4fff299bfb45-kube-api-access-4jdqd\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.071106 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nmn6\" (UniqueName: \"kubernetes.io/projected/fb5635f7-027d-4411-ad86-2e27d7e74efb-kube-api-access-4nmn6\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.071215 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-png85\" (UniqueName: \"kubernetes.io/projected/dd6fed1e-59d9-400c-b9cd-561ec215a48e-kube-api-access-png85\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.071329 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60c29281-e44c-44f2-9981-e76e97baf4a8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.071487 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d003f73-5e4b-475f-bb57-66f13916a0c5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qr7zk\" (UID: \"0d003f73-5e4b-475f-bb57-66f13916a0c5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.071592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/793a0705-3d09-4760-aa80-481eddf0ad45-metrics-tls\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072110 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139070ff-0113-4e25-93ef-d1861e0c4318-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072229 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a153dfd1-0dbb-4913-bcf4-768496d5db9a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072350 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2bmh\" (UniqueName: \"kubernetes.io/projected/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-kube-api-access-t2bmh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.071917 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-config\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072449 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-stats-auth\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072536 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-socket-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072580 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7b4\" (UniqueName: \"kubernetes.io/projected/a153dfd1-0dbb-4913-bcf4-768496d5db9a-kube-api-access-hh7b4\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072602 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85cc9fad-228f-4d80-8348-8743b79cf691-proxy-tls\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072623 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb5635f7-027d-4411-ad86-2e27d7e74efb-signing-cabundle\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072639 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d30a2dd-d2ba-463e-8d95-6f47271bac81-service-ca-bundle\") pod 
\"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072669 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072686 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4a06222-8c3c-46c0-a29c-acf87a56c0db-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0787504c-4f5c-4089-8f03-679eb74d7b50-serving-cert\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072740 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd9bc\" (UniqueName: \"kubernetes.io/projected/e8777c17-ca96-4124-9420-0d41d4d458a9-kube-api-access-fd9bc\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072764 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-node-bootstrap-token\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072784 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072800 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/139070ff-0113-4e25-93ef-d1861e0c4318-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" 
(UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb2abbd-0aca-4038-91a7-4fff299bfb45-config-volume\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726e8147-8d74-4797-8989-fdb12e97bd01-config\") pod \"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072852 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/726e8147-8d74-4797-8989-fdb12e97bd01-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072874 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chp89\" (UniqueName: \"kubernetes.io/projected/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-kube-api-access-chp89\") pod \"ingress-canary-ccmv2\" (UID: \"e972f42f-2bbb-4c1a-83a8-56b3fab82b60\") " pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072891 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn6kv\" (UniqueName: \"kubernetes.io/projected/85cc9fad-228f-4d80-8348-8743b79cf691-kube-api-access-gn6kv\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072908 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c4fc3d9-eec6-483b-94ac-626032f65ff6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072924 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85cc9fad-228f-4d80-8348-8743b79cf691-images\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072940 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9bb2abbd-0aca-4038-91a7-4fff299bfb45-metrics-tls\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072961 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/793a0705-3d09-4760-aa80-481eddf0ad45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072980 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c4fc3d9-eec6-483b-94ac-626032f65ff6-srv-cert\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073016 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsvmq\" (UniqueName: \"kubernetes.io/projected/2c4fc3d9-eec6-483b-94ac-626032f65ff6-kube-api-access-tsvmq\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073033 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f95538f5-bbe6-43f4-a40c-0a60b3a2b828-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d2q8r\" (UID: \"f95538f5-bbe6-43f4-a40c-0a60b3a2b828\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073062 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmsp\" (UniqueName: \"kubernetes.io/projected/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-kube-api-access-mfmsp\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073079 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-mountpoint-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073094 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60c29281-e44c-44f2-9981-e76e97baf4a8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073110 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/89a685cc-973c-4317-b6fe-5bc842e32099-config-volume\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073112 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a153dfd1-0dbb-4913-bcf4-768496d5db9a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073161 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/139070ff-0113-4e25-93ef-d1861e0c4318-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073128 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgjj5\" (UniqueName: \"kubernetes.io/projected/c35074ed-4b34-451a-aea5-9022f6f8c685-kube-api-access-vgjj5\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073228 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-registration-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-plugins-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073273 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-machine-approver-tls\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-metrics-certs\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vltcl\" (UniqueName: \"kubernetes.io/projected/0d003f73-5e4b-475f-bb57-66f13916a0c5-kube-api-access-vltcl\") pod \"control-plane-machine-set-operator-78cbb6b69f-qr7zk\" (UID: \"0d003f73-5e4b-475f-bb57-66f13916a0c5\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073403 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4a06222-8c3c-46c0-a29c-acf87a56c0db-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.073441 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.573417543 +0000 UTC m=+151.362369914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073462 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073487 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7787q\" (UniqueName: \"kubernetes.io/projected/793a0705-3d09-4760-aa80-481eddf0ad45-kube-api-access-7787q\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073504 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85cc9fad-228f-4d80-8348-8743b79cf691-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073531 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnbtz\" (UniqueName: \"kubernetes.io/projected/89a685cc-973c-4317-b6fe-5bc842e32099-kube-api-access-dnbtz\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073555 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8777c17-ca96-4124-9420-0d41d4d458a9-apiservice-cert\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:17 crc 
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2fd27c6-f3db-49bd-b347-b00275c78e6e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9hgkr\" (UID: \"a2fd27c6-f3db-49bd-b347-b00275c78e6e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073580 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-registration-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073598 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4a06222-8c3c-46c0-a29c-acf87a56c0db-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.072013 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-csi-data-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073624 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e8777c17-ca96-4124-9420-0d41d4d458a9-tmpfs\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh8xc\" (UniqueName: \"kubernetes.io/projected/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-kube-api-access-mh8xc\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073670 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-srv-cert\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073697 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c35074ed-4b34-451a-aea5-9022f6f8c685-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48"
Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073718 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/dd6fed1e-59d9-400c-b9cd-561ec215a48e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.073747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q5mh\" (UniqueName: \"kubernetes.io/projected/47cb9fd2-9706-464a-9360-901e3da7aa86-kube-api-access-2q5mh\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.074189 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-mountpoint-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.074207 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-default-certificate\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.074962 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60c29281-e44c-44f2-9981-e76e97baf4a8-config\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.075209 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/793a0705-3d09-4760-aa80-481eddf0ad45-metrics-tls\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.075406 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-plugins-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.075851 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d30a2dd-d2ba-463e-8d95-6f47271bac81-service-ca-bundle\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.075946 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47cb9fd2-9706-464a-9360-901e3da7aa86-socket-dir\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.076198 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726e8147-8d74-4797-8989-fdb12e97bd01-config\") pod \"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.076620 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60c29281-e44c-44f2-9981-e76e97baf4a8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.076899 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e8777c17-ca96-4124-9420-0d41d4d458a9-tmpfs\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.077362 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/85cc9fad-228f-4d80-8348-8743b79cf691-auth-proxy-config\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.078206 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dd6fed1e-59d9-400c-b9cd-561ec215a48e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.078371 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-metrics-certs\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.078946 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/793a0705-3d09-4760-aa80-481eddf0ad45-trusted-ca\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.079501 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8d30a2dd-d2ba-463e-8d95-6f47271bac81-stats-auth\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.079729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a153dfd1-0dbb-4913-bcf4-768496d5db9a-serving-cert\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.081408 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/726e8147-8d74-4797-8989-fdb12e97bd01-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.082671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4a06222-8c3c-46c0-a29c-acf87a56c0db-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.082974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dd6fed1e-59d9-400c-b9cd-561ec215a48e-proxy-tls\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.083102 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/139070ff-0113-4e25-93ef-d1861e0c4318-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.083743 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.085852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d4a06222-8c3c-46c0-a29c-acf87a56c0db-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.105573 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.116621 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c35074ed-4b34-451a-aea5-9022f6f8c685-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.123816 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.144367 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.164011 4812 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.173259 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c35074ed-4b34-451a-aea5-9022f6f8c685-config\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.175044 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.175162 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.675138017 +0000 UTC m=+151.464090418 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.175651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.175992 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.675984121 +0000 UTC m=+151.464936492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.183642 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.192289 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c35074ed-4b34-451a-aea5-9022f6f8c685-images\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.204819 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.223825 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.230967 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/85cc9fad-228f-4d80-8348-8743b79cf691-proxy-tls\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.244446 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.247113 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85cc9fad-228f-4d80-8348-8743b79cf691-images\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.264864 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.276580 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.276840 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.77680752 +0000 UTC m=+151.565759931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.276958 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.277824 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.777805927 +0000 UTC m=+151.566758298 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.284490 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.305235 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.323378 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.344521 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.364499 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.379647 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.379859 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 19:19:17.87983027 +0000 UTC m=+151.668782671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.380275 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.380900 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.880855898 +0000 UTC m=+151.669808299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.382815 4812 request.go:700] Waited for 1.018121853s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.385128 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.397311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.403824 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.405826 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 
19:19:17.424843 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.444404 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.460068 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.465376 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.482390 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.482683 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.982620692 +0000 UTC m=+151.771573103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.483025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.483826 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:17.983802415 +0000 UTC m=+151.772754816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.494738 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.501657 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.503977 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.524111 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.543949 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.563763 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.570132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-profile-collector-cert\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.571042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c4fc3d9-eec6-483b-94ac-626032f65ff6-profile-collector-cert\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.573757 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a685cc-973c-4317-b6fe-5bc842e32099-secret-volume\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.584653 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.585116 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.585316 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.085290922 +0000 UTC m=+151.874243333 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.585785 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.586135 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.086120566 +0000 UTC m=+151.875072977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.604725 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.619148 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c4fc3d9-eec6-483b-94ac-626032f65ff6-srv-cert\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.623531 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.644875 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.657502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/0d003f73-5e4b-475f-bb57-66f13916a0c5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qr7zk\" (UID: \"0d003f73-5e4b-475f-bb57-66f13916a0c5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.664770 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.683983 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.687312 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.687557 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.18750934 +0000 UTC m=+151.976461761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.688571 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.689101 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.189078374 +0000 UTC m=+151.978030785 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.691251 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2fd27c6-f3db-49bd-b347-b00275c78e6e-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9hgkr\" (UID: \"a2fd27c6-f3db-49bd-b347-b00275c78e6e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.704822 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.711778 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-srv-cert\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.724408 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.743663 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.749186 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f95538f5-bbe6-43f4-a40c-0a60b3a2b828-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-d2q8r\" (UID: \"f95538f5-bbe6-43f4-a40c-0a60b3a2b828\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.763593 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.767850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e8777c17-ca96-4124-9420-0d41d4d458a9-apiservice-cert\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.774715 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e8777c17-ca96-4124-9420-0d41d4d458a9-webhook-cert\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.784576 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.790068 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.790418 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.290388346 +0000 UTC m=+152.079340757 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.791995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.792498 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.292476214 +0000 UTC m=+152.081428615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.803962 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.823557 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.829632 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0787504c-4f5c-4089-8f03-679eb74d7b50-serving-cert\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.844958 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.853805 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0787504c-4f5c-4089-8f03-679eb74d7b50-config\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.864585 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.884776 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.893456 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.893677 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.393638312 +0000 UTC m=+152.182590733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.894568 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.895592 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.395521885 +0000 UTC m=+152.184474286 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.896887 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/fb5635f7-027d-4411-ad86-2e27d7e74efb-signing-key\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.904724 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.907950 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/fb5635f7-027d-4411-ad86-2e27d7e74efb-signing-cabundle\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.924418 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.944484 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.964073 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.995465 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.995660 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.495623684 +0000 UTC m=+152.284576125 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:17 crc kubenswrapper[4812]: I1124 19:19:17.996118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:17 crc kubenswrapper[4812]: E1124 19:19:17.996905 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.496866568 +0000 UTC m=+152.285818989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.004257 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.006407 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a685cc-973c-4317-b6fe-5bc842e32099-config-volume\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.012318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngt69\" (UniqueName: \"kubernetes.io/projected/1b540550-77e7-4545-9e34-b972ab5ec677-kube-api-access-ngt69\") pod \"downloads-7954f5f757-4spkn\" (UID: \"1b540550-77e7-4545-9e34-b972ab5ec677\") " pod="openshift-console/downloads-7954f5f757-4spkn" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.051552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d7kg\" (UniqueName: \"kubernetes.io/projected/74f627e0-8a46-4876-b456-9efda3c4ad41-kube-api-access-2d7kg\") pod \"apiserver-7bbb656c7d-wfqs2\" (UID: \"74f627e0-8a46-4876-b456-9efda3c4ad41\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.068377 4812 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.068478 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-certs podName:b781aa9b-e5aa-43c5-b9a5-4962fe210ad7 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.568458723 +0000 UTC m=+152.357411094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-certs") pod "machine-config-server-cs84n" (UID: "b781aa9b-e5aa-43c5-b9a5-4962fe210ad7") : failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.072113 4812 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.072190 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-cert podName:e972f42f-2bbb-4c1a-83a8-56b3fab82b60 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.572173106 +0000 UTC m=+152.361125477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-cert") pod "ingress-canary-ccmv2" (UID: "e972f42f-2bbb-4c1a-83a8-56b3fab82b60") : failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.073812 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfbhn\" (UniqueName: \"kubernetes.io/projected/03ad7961-295b-4e12-82b0-b75f196049b0-kube-api-access-sfbhn\") pod \"oauth-openshift-558db77b4-fqzwn\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") " pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.074224 4812 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.074303 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-node-bootstrap-token podName:b781aa9b-e5aa-43c5-b9a5-4962fe210ad7 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.574283765 +0000 UTC m=+152.363236136 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-node-bootstrap-token") pod "machine-config-server-cs84n" (UID: "b781aa9b-e5aa-43c5-b9a5-4962fe210ad7") : failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.075722 4812 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.075761 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9bb2abbd-0aca-4038-91a7-4fff299bfb45-config-volume podName:9bb2abbd-0aca-4038-91a7-4fff299bfb45 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.575751166 +0000 UTC m=+152.364703537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9bb2abbd-0aca-4038-91a7-4fff299bfb45-config-volume") pod "dns-default-nzz22" (UID: "9bb2abbd-0aca-4038-91a7-4fff299bfb45") : failed to sync configmap cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.075848 4812 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.075891 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9bb2abbd-0aca-4038-91a7-4fff299bfb45-metrics-tls podName:9bb2abbd-0aca-4038-91a7-4fff299bfb45 nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.575881709 +0000 UTC m=+152.364834080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9bb2abbd-0aca-4038-91a7-4fff299bfb45-metrics-tls") pod "dns-default-nzz22" (UID: "9bb2abbd-0aca-4038-91a7-4fff299bfb45") : failed to sync secret cache: timed out waiting for the condition Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.076157 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.098106 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.098282 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.598258103 +0000 UTC m=+152.387210474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.098508 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.098653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j46xv\" (UniqueName: \"kubernetes.io/projected/e28dbd25-fd09-440e-b2d8-4560d43ea079-kube-api-access-j46xv\") pod \"etcd-operator-b45778765-22pnq\" (UID: \"e28dbd25-fd09-440e-b2d8-4560d43ea079\") " pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.098863 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.598840759 +0000 UTC m=+152.387793200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.105000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdhkd\" (UniqueName: \"kubernetes.io/projected/16e4032b-e804-4cf0-9a9f-0c23319c06df-kube-api-access-xdhkd\") pod \"apiserver-76f77b778f-xtdvk\" (UID: \"16e4032b-e804-4cf0-9a9f-0c23319c06df\") " pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.105517 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.160180 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msmmt\" (UniqueName: \"kubernetes.io/projected/afeebfe0-572d-4c6e-b706-2a35b40e25e0-kube-api-access-msmmt\") pod \"cluster-samples-operator-665b6dd947-7ndrw\" (UID: \"afeebfe0-572d-4c6e-b706-2a35b40e25e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.163047 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bprtz\" (UniqueName: \"kubernetes.io/projected/7dab4563-6b36-4d8e-a789-58459d65cb7c-kube-api-access-bprtz\") pod \"openshift-apiserver-operator-796bbdcf4f-gzd88\" (UID: \"7dab4563-6b36-4d8e-a789-58459d65cb7c\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.171655 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4spkn" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.183305 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdsn5\" (UniqueName: \"kubernetes.io/projected/bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0-kube-api-access-zdsn5\") pod \"authentication-operator-69f744f599-9lvg4\" (UID: \"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.184812 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.200109 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.200532 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.700482621 +0000 UTC m=+152.489435052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.200889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.201269 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.701250222 +0000 UTC m=+152.490202603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.203523 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.224502 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.243290 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.247229 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2"] Nov 24 19:19:18 crc kubenswrapper[4812]: W1124 19:19:18.255143 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74f627e0_8a46_4876_b456_9efda3c4ad41.slice/crio-339d45ee8e0686f826274d8ffefa1e650ed766e8270d55048c5aaad1470fbe60 WatchSource:0}: Error finding container 339d45ee8e0686f826274d8ffefa1e650ed766e8270d55048c5aaad1470fbe60: Status 404 returned error can't find the container with id 339d45ee8e0686f826274d8ffefa1e650ed766e8270d55048c5aaad1470fbe60 Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.259063 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.263525 4812 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.267094 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.285412 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.291120 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.302787 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.303456 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.803437559 +0000 UTC m=+152.592389930 (durationBeforeRetry 500ms). 
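The recurring "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" failures for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 persist until the hostpath provisioner plugin pod (csi-hostpathplugin-pgq95, scheduled further down in this log) registers itself with the kubelet over the plugin-registration socket. Registered drivers are mirrored into the node's CSINode object, which makes the state easy to inspect. A sketch using client-go, assuming in-cluster credentials and the node name "crc" from this log:

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Lists the CSI drivers currently registered on a node by reading its
// CSINode object. While kubevirt.io.hostpath-provisioner is absent from
// this list, every MountDevice/TearDown attempt above is expected to fail.
func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered driver: %s (node ID %s)\n", d.Name, d.NodeID)
	}
}
```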
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.305030 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.305236 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.327505 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.347320 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.363778 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.373396 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4spkn"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.373502 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.384205 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 19:19:18 crc kubenswrapper[4812]: W1124 19:19:18.386430 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b540550_77e7_4545_9e34_b972ab5ec677.slice/crio-dc802632e2244006c32ab9789901e384820fde2930b566c834f0bd77aaaa52d5 WatchSource:0}: Error finding container dc802632e2244006c32ab9789901e384820fde2930b566c834f0bd77aaaa52d5: Status 404 returned error can't find the container with id dc802632e2244006c32ab9789901e384820fde2930b566c834f0bd77aaaa52d5 Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.402945 4812 request.go:700] Waited for 1.930605813s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.404355 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.404846 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc 
kubenswrapper[4812]: E1124 19:19:18.405210 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:18.905195104 +0000 UTC m=+152.694147475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.426947 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.443855 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.457519 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.481614 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfsnq\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-kube-api-access-zfsnq\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.504055 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xtdvk"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.506555 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.507717 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.007666018 +0000 UTC m=+152.796618389 (durationBeforeRetry 500ms). 
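The "Waited for 1.930605813s due to client-side throttling, not priority and fairness" entry just above (request.go:700) is emitted by client-go's own token-bucket rate limiter delaying the kubelet's GETs while dozens of reflectors start at once; that delay in turn stretches the cache-sync waits seen earlier. The limiter lives on the rest.Config every client is built from. A sketch under stated assumptions (client-go's default QPS 5 / Burst 10, and the conventional kubelet kubeconfig path on this kind of node):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// The "Waited ... due to client-side throttling" log line appears whenever
// client-go's rate limiter queues a request. The limiter is configured per
// rest.Config; the values below are client-go's defaults, not necessarily
// what this kubelet actually runs with. The kubeconfig path is an assumed
// conventional location, not taken from this log.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 5    // steady-state requests per second before queueing
	cfg.Burst = 10 // short bursts allowed above QPS
	fmt.Printf("client rate limit: qps=%v burst=%d\n", cfg.QPS, cfg.Burst)
	_ = rest.CopyConfig(cfg) // any clientset built from cfg inherits this limiter
}
```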
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.511987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldxd\" (UniqueName: \"kubernetes.io/projected/6432557f-df32-40cb-a0b5-6ee2c652b10f-kube-api-access-7ldxd\") pod \"dns-operator-744455d44c-kqn98\" (UID: \"6432557f-df32-40cb-a0b5-6ee2c652b10f\") " pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.523023 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgq9m\" (UniqueName: \"kubernetes.io/projected/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-kube-api-access-mgq9m\") pod \"controller-manager-879f6c89f-6zn22\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.549758 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jssqm\" (UniqueName: \"kubernetes.io/projected/a9392d98-21d0-4761-a4e4-0a75367b4c31-kube-api-access-jssqm\") pod \"route-controller-manager-6576b87f9c-574hr\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.571988 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtswf\" (UniqueName: \"kubernetes.io/projected/68125ac7-cf1b-4461-820a-b7318076e62d-kube-api-access-dtswf\") pod \"console-f9d7485db-jb4lb\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.591981 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx6d9\" (UniqueName: \"kubernetes.io/projected/6f120455-5d97-4a64-a279-3992aeba7663-kube-api-access-vx6d9\") pod \"openshift-controller-manager-operator-756b6f6bc6-zv5wk\" (UID: \"6f120455-5d97-4a64-a279-3992aeba7663\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.597854 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.601948 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-bound-sa-token\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.611308 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-node-bootstrap-token\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.611437 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb2abbd-0aca-4038-91a7-4fff299bfb45-config-volume\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.611482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9bb2abbd-0aca-4038-91a7-4fff299bfb45-metrics-tls\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.611610 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-certs\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.611671 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-cert\") pod \"ingress-canary-ccmv2\" (UID: \"e972f42f-2bbb-4c1a-83a8-56b3fab82b60\") " pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.613430 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb2abbd-0aca-4038-91a7-4fff299bfb45-config-volume\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.613862 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.113841686 +0000 UTC m=+152.902794057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.614444 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.616213 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-cert\") pod \"ingress-canary-ccmv2\" (UID: \"e972f42f-2bbb-4c1a-83a8-56b3fab82b60\") " pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.617310 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-node-bootstrap-token\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.617903 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-certs\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.620142 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9bb2abbd-0aca-4038-91a7-4fff299bfb45-metrics-tls\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.621906 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9lvg4"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.622972 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62zgl\" (UniqueName: \"kubernetes.io/projected/2ce9900d-791b-48db-ad36-1fcdc3200dbf-kube-api-access-62zgl\") pod \"console-operator-58897d9998-krdpg\" (UID: \"2ce9900d-791b-48db-ad36-1fcdc3200dbf\") " pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.624646 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.641240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pb4k\" (UniqueName: \"kubernetes.io/projected/f95538f5-bbe6-43f4-a40c-0a60b3a2b828-kube-api-access-6pb4k\") pod \"multus-admission-controller-857f4d67dd-d2q8r\" (UID: \"f95538f5-bbe6-43f4-a40c-0a60b3a2b828\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.644287 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqzwn"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.660910 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjmrs\" (UniqueName: \"kubernetes.io/projected/a2fd27c6-f3db-49bd-b347-b00275c78e6e-kube-api-access-jjmrs\") pod \"package-server-manager-789f6589d5-9hgkr\" (UID: \"a2fd27c6-f3db-49bd-b347-b00275c78e6e\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.665909 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-22pnq"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.683674 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmlsh\" (UniqueName: \"kubernetes.io/projected/d4a06222-8c3c-46c0-a29c-acf87a56c0db-kube-api-access-tmlsh\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.692841 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.697221 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.699849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/139070ff-0113-4e25-93ef-d1861e0c4318-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2knf7\" (UID: \"139070ff-0113-4e25-93ef-d1861e0c4318\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.716369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.717014 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.21699187 +0000 UTC m=+153.005944241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.724948 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.727646 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrxj\" (UniqueName: \"kubernetes.io/projected/8d30a2dd-d2ba-463e-8d95-6f47271bac81-kube-api-access-lzrxj\") pod \"router-default-5444994796-qrrxs\" (UID: \"8d30a2dd-d2ba-463e-8d95-6f47271bac81\") " pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.739886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/726e8147-8d74-4797-8989-fdb12e97bd01-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-bsr2j\" (UID: \"726e8147-8d74-4797-8989-fdb12e97bd01\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.765876 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9h28\" (UniqueName: \"kubernetes.io/projected/89c6a1cc-e545-44a6-a7e3-8e900ca0858f-kube-api-access-v9h28\") pod \"machine-approver-56656f9798-qln2c\" (UID: \"89c6a1cc-e545-44a6-a7e3-8e900ca0858f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.782013 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.784615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nmn6\" (UniqueName: \"kubernetes.io/projected/fb5635f7-027d-4411-ad86-2e27d7e74efb-kube-api-access-4nmn6\") pod \"service-ca-9c57cc56f-qjq4v\" (UID: \"fb5635f7-027d-4411-ad86-2e27d7e74efb\") " pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.806364 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md485\" (UniqueName: \"kubernetes.io/projected/29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8-kube-api-access-md485\") pod \"migrator-59844c95c7-bnzn9\" (UID: \"29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.815617 4812 generic.go:334] "Generic (PLEG): container finished" podID="74f627e0-8a46-4876-b456-9efda3c4ad41" containerID="d15df23c26c800a71bd9e96f7e3b6e2483f2c960acfaecef25de241e9c90b0b0" exitCode=0 Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.815749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" event={"ID":"74f627e0-8a46-4876-b456-9efda3c4ad41","Type":"ContainerDied","Data":"d15df23c26c800a71bd9e96f7e3b6e2483f2c960acfaecef25de241e9c90b0b0"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.815812 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" event={"ID":"74f627e0-8a46-4876-b456-9efda3c4ad41","Type":"ContainerStarted","Data":"339d45ee8e0686f826274d8ffefa1e650ed766e8270d55048c5aaad1470fbe60"} Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.818760 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.318740955 +0000 UTC m=+153.107693326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.820259 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.821787 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-kqn98"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.821969 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" event={"ID":"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0","Type":"ContainerStarted","Data":"a9b019c17ed24aefa197c8e431dae775ab60218ef7c86543693d4b94c09e7479"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.824790 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" event={"ID":"03ad7961-295b-4e12-82b0-b75f196049b0","Type":"ContainerStarted","Data":"05a0ae1ac223e06c4426ce0719640bc10179e0ee635b6b3165af7f32ea1643f3"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.831121 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crzxl\" (UniqueName: \"kubernetes.io/projected/0787504c-4f5c-4089-8f03-679eb74d7b50-kube-api-access-crzxl\") pod \"service-ca-operator-777779d784-7dndk\" (UID: \"0787504c-4f5c-4089-8f03-679eb74d7b50\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.837261 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" event={"ID":"e28dbd25-fd09-440e-b2d8-4560d43ea079","Type":"ContainerStarted","Data":"ace6b90d70c5204d6491f42b0ba7575656f74d9e4df17a43ab852158cb88c9d1"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.840497 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" event={"ID":"16e4032b-e804-4cf0-9a9f-0c23319c06df","Type":"ContainerStarted","Data":"a369f9cfda8529ebc7c5a58a3489facbe630c41f8ed914b406066a4c7193e6a3"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.841209 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb6l6\" (UniqueName: \"kubernetes.io/projected/04267c57-8e76-48a4-b768-ce571525ed62-kube-api-access-xb6l6\") pod \"marketplace-operator-79b997595-nvpxb\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") " pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.843590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" 
event={"ID":"afeebfe0-572d-4c6e-b706-2a35b40e25e0","Type":"ContainerStarted","Data":"23851c1ba296728b7b3fc160ef3b70ec037519aadfa9edcc586b925b1f9126ce"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.843623 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" event={"ID":"afeebfe0-572d-4c6e-b706-2a35b40e25e0","Type":"ContainerStarted","Data":"61f6eb6df9224ce1b6a61bd2e6a771bd6ee07f934c77f913a63c39e516345c82"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.845238 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4spkn" event={"ID":"1b540550-77e7-4545-9e34-b972ab5ec677","Type":"ContainerStarted","Data":"d33895f68f7bf6abb55e041836c2bd24ef9fa522c7b856717aebd79b7177a164"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.845264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4spkn" event={"ID":"1b540550-77e7-4545-9e34-b972ab5ec677","Type":"ContainerStarted","Data":"dc802632e2244006c32ab9789901e384820fde2930b566c834f0bd77aaaa52d5"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.845892 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4spkn" Nov 24 19:19:18 crc kubenswrapper[4812]: W1124 19:19:18.846750 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6432557f_df32_40cb_a0b5_6ee2c652b10f.slice/crio-63c503c544b4aefc1d0cc9d8e3156649a47552a71400aa96d9bc0e94d8a7afb6 WatchSource:0}: Error finding container 63c503c544b4aefc1d0cc9d8e3156649a47552a71400aa96d9bc0e94d8a7afb6: Status 404 returned error can't find the container with id 63c503c544b4aefc1d0cc9d8e3156649a47552a71400aa96d9bc0e94d8a7afb6 Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.853960 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-4spkn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.854004 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4spkn" podUID="1b540550-77e7-4545-9e34-b972ab5ec677" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.858884 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.858956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jdqd\" (UniqueName: \"kubernetes.io/projected/9bb2abbd-0aca-4038-91a7-4fff299bfb45-kube-api-access-4jdqd\") pod \"dns-default-nzz22\" (UID: \"9bb2abbd-0aca-4038-91a7-4fff299bfb45\") " pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.859582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" event={"ID":"7dab4563-6b36-4d8e-a789-58459d65cb7c","Type":"ContainerStarted","Data":"4b5e328442b4f555225a6c4f45e20e0ffd1707a54c592317ed89412a3b89cd17"} Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.874202 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.876723 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.882231 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr"] Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.886161 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60c29281-e44c-44f2-9981-e76e97baf4a8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8zwdr\" (UID: \"60c29281-e44c-44f2-9981-e76e97baf4a8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.888763 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.895895 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" Nov 24 19:19:18 crc kubenswrapper[4812]: W1124 19:19:18.915796 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9392d98_21d0_4761_a4e4_0a75367b4c31.slice/crio-d750689913de07cb46bfb0b07857dc4789753d64d8c4604215b97bce80af9f25 WatchSource:0}: Error finding container d750689913de07cb46bfb0b07857dc4789753d64d8c4604215b97bce80af9f25: Status 404 returned error can't find the container with id d750689913de07cb46bfb0b07857dc4789753d64d8c4604215b97bce80af9f25 Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.916130 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.917319 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-png85\" (UniqueName: \"kubernetes.io/projected/dd6fed1e-59d9-400c-b9cd-561ec215a48e-kube-api-access-png85\") pod \"machine-config-controller-84d6567774-f6574\" (UID: \"dd6fed1e-59d9-400c-b9cd-561ec215a48e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.920201 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgjj5\" (UniqueName: \"kubernetes.io/projected/c35074ed-4b34-451a-aea5-9022f6f8c685-kube-api-access-vgjj5\") pod \"machine-api-operator-5694c8668f-6nn48\" (UID: \"c35074ed-4b34-451a-aea5-9022f6f8c685\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.920312 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.920745 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.921293 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.42125917 +0000 UTC m=+153.210211541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.921411 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:18 crc kubenswrapper[4812]: E1124 19:19:18.921890 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.421878517 +0000 UTC m=+153.210830878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.928716 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qrrxs" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.938641 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.941547 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2bmh\" (UniqueName: \"kubernetes.io/projected/7b8ace94-12b2-4176-ba9f-e14f0b55e3c1-kube-api-access-t2bmh\") pod \"kube-storage-version-migrator-operator-b67b599dd-5bljf\" (UID: \"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.964067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/793a0705-3d09-4760-aa80-481eddf0ad45-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.964532 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.982892 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.997413 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q5mh\" (UniqueName: \"kubernetes.io/projected/47cb9fd2-9706-464a-9360-901e3da7aa86-kube-api-access-2q5mh\") pod \"csi-hostpathplugin-pgq95\" (UID: \"47cb9fd2-9706-464a-9360-901e3da7aa86\") " pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:18 crc kubenswrapper[4812]: I1124 19:19:18.998005 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:18.999800 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4a06222-8c3c-46c0-a29c-acf87a56c0db-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-277zs\" (UID: \"d4a06222-8c3c-46c0-a29c-acf87a56c0db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.012604 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.020269 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsvmq\" (UniqueName: \"kubernetes.io/projected/2c4fc3d9-eec6-483b-94ac-626032f65ff6-kube-api-access-tsvmq\") pod \"olm-operator-6b444d44fb-56qgd\" (UID: \"2c4fc3d9-eec6-483b-94ac-626032f65ff6\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.023221 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.025755 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.026562 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.526530343 +0000 UTC m=+153.315482714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.052029 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd9bc\" (UniqueName: \"kubernetes.io/projected/e8777c17-ca96-4124-9420-0d41d4d458a9-kube-api-access-fd9bc\") pod \"packageserver-d55dfcdfc-7kfm7\" (UID: \"e8777c17-ca96-4124-9420-0d41d4d458a9\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.052476 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.064043 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.067386 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-krdpg"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.068149 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmsp\" (UniqueName: \"kubernetes.io/projected/3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3-kube-api-access-mfmsp\") pod \"catalog-operator-68c6474976-v8jpj\" (UID: \"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.085132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7787q\" (UniqueName: \"kubernetes.io/projected/793a0705-3d09-4760-aa80-481eddf0ad45-kube-api-access-7787q\") pod \"ingress-operator-5b745b69d9-zs9kb\" (UID: \"793a0705-3d09-4760-aa80-481eddf0ad45\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.126418 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.126920 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.626902789 +0000 UTC m=+153.415855160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.132195 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chp89\" (UniqueName: \"kubernetes.io/projected/e972f42f-2bbb-4c1a-83a8-56b3fab82b60-kube-api-access-chp89\") pod \"ingress-canary-ccmv2\" (UID: \"e972f42f-2bbb-4c1a-83a8-56b3fab82b60\") " pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.141943 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.142060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnbtz\" (UniqueName: \"kubernetes.io/projected/89a685cc-973c-4317-b6fe-5bc842e32099-kube-api-access-dnbtz\") pod \"collect-profiles-29400195-lwbwx\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.149200 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn6kv\" (UniqueName: \"kubernetes.io/projected/85cc9fad-228f-4d80-8348-8743b79cf691-kube-api-access-gn6kv\") pod \"machine-config-operator-74547568cd-qb8jp\" (UID: \"85cc9fad-228f-4d80-8348-8743b79cf691\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.159552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vltcl\" (UniqueName: \"kubernetes.io/projected/0d003f73-5e4b-475f-bb57-66f13916a0c5-kube-api-access-vltcl\") pod \"control-plane-machine-set-operator-78cbb6b69f-qr7zk\" (UID: \"0d003f73-5e4b-475f-bb57-66f13916a0c5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:19 crc kubenswrapper[4812]: W1124 19:19:19.160615 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce9900d_791b_48db_ad36_1fcdc3200dbf.slice/crio-5337345dfa2cfc09a035c0dfe4ec95cb2de9f5d692b2f8eaa9362bbaedfa538e WatchSource:0}: Error finding container 5337345dfa2cfc09a035c0dfe4ec95cb2de9f5d692b2f8eaa9362bbaedfa538e: Status 404 returned error can't find the container with id 5337345dfa2cfc09a035c0dfe4ec95cb2de9f5d692b2f8eaa9362bbaedfa538e Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.167697 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.179623 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.179779 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7b4\" (UniqueName: \"kubernetes.io/projected/a153dfd1-0dbb-4913-bcf4-768496d5db9a-kube-api-access-hh7b4\") pod \"openshift-config-operator-7777fb866f-xv79f\" (UID: \"a153dfd1-0dbb-4913-bcf4-768496d5db9a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.184390 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jb4lb"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.209762 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.210816 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh8xc\" (UniqueName: \"kubernetes.io/projected/b781aa9b-e5aa-43c5-b9a5-4962fe210ad7-kube-api-access-mh8xc\") pod \"machine-config-server-cs84n\" (UID: \"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7\") " pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.213686 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.227546 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.227733 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.727705007 +0000 UTC m=+153.516657368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.227818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.228120 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.728112768 +0000 UTC m=+153.517065129 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.236319 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.244047 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ccmv2" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.244905 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.252481 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-cs84n" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.253114 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.329281 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.329424 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.82940409 +0000 UTC m=+153.618356461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.329608 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.329917 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.829909824 +0000 UTC m=+153.618862195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.333727 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6zn22"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.336418 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.362168 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.422168 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.431194 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.431983 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:19.931966337 +0000 UTC m=+153.720918698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.451301 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.463687 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.533730 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.535212 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.035200473 +0000 UTC m=+153.824152844 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: W1124 19:19:19.575869 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod139070ff_0113_4e25_93ef_d1861e0c4318.slice/crio-7e303047c93c74156f3661762f2343758d5cce1dd908258ea8ab2673fc40075f WatchSource:0}: Error finding container 7e303047c93c74156f3661762f2343758d5cce1dd908258ea8ab2673fc40075f: Status 404 returned error can't find the container with id 7e303047c93c74156f3661762f2343758d5cce1dd908258ea8ab2673fc40075f Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.607744 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.636322 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.636617 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.136594518 +0000 UTC m=+153.925546889 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: W1124 19:19:19.694599 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb781aa9b_e5aa_43c5_b9a5_4962fe210ad7.slice/crio-6c80a2bfec6d131834383e76fcd30bece4c3edf42db2d07db95d4369f8af0dae WatchSource:0}: Error finding container 6c80a2bfec6d131834383e76fcd30bece4c3edf42db2d07db95d4369f8af0dae: Status 404 returned error can't find the container with id 6c80a2bfec6d131834383e76fcd30bece4c3edf42db2d07db95d4369f8af0dae Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.696799 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7dndk"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.730813 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qjq4v"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.739961 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.740867 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.240342678 +0000 UTC m=+154.029295049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.844623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.844769 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.344747067 +0000 UTC m=+154.133699438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.845312 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.850104 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.350078105 +0000 UTC m=+154.139030476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.873368 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" event={"ID":"6f120455-5d97-4a64-a279-3992aeba7663","Type":"ContainerStarted","Data":"a32343d2e759c6265841adc638b25c5eeff5293d5991ee71a12db02e3f810715"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.876955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jb4lb" event={"ID":"68125ac7-cf1b-4461-820a-b7318076e62d","Type":"ContainerStarted","Data":"d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.877282 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jb4lb" event={"ID":"68125ac7-cf1b-4461-820a-b7318076e62d","Type":"ContainerStarted","Data":"fa4b917ebd1be0bdd22b421c7647ab27c65c98326466c63f19eeb1c235eb413f"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.880810 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" event={"ID":"89c6a1cc-e545-44a6-a7e3-8e900ca0858f","Type":"ContainerStarted","Data":"6f6ab0ce3801794b375c8e2f9732d6ce01f536ea04df39c430f1bb89daa0c8d2"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.882525 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" event={"ID":"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a","Type":"ContainerStarted","Data":"269973b9eef9d55dfd028e938e0a26993f6fe1ffacf91d3b0d3719bc16c9cf7f"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.885917 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" event={"ID":"bdd39769-5a8b-4e2d-b699-bde1c3a7a9e0","Type":"ContainerStarted","Data":"477bab3d00e4b62debaedc2e57873ee1ca69f1f72cf8862f20db981b4bb0742f"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.889534 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" event={"ID":"03ad7961-295b-4e12-82b0-b75f196049b0","Type":"ContainerStarted","Data":"edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.889915 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.891832 4812 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fqzwn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.35:6443/healthz\": dial tcp 10.217.0.35:6443: connect: connection refused" start-of-body= Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.891886 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" podUID="03ad7961-295b-4e12-82b0-b75f196049b0" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.35:6443/healthz\": dial tcp 10.217.0.35:6443: connect: connection refused" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.893562 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" event={"ID":"e28dbd25-fd09-440e-b2d8-4560d43ea079","Type":"ContainerStarted","Data":"fcac6fca448be97a343a47dd7957b75d104a797e08ad96cbf01518125e17d4dd"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.896017 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" event={"ID":"a9392d98-21d0-4761-a4e4-0a75367b4c31","Type":"ContainerStarted","Data":"e7c49c08385020cf2e9de1b723115a409486bb32b3c57141a8b471c3ec7b4433"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.896039 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" event={"ID":"a9392d98-21d0-4761-a4e4-0a75367b4c31","Type":"ContainerStarted","Data":"d750689913de07cb46bfb0b07857dc4789753d64d8c4604215b97bce80af9f25"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.896243 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.902196 4812 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-574hr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.902237 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" podUID="a9392d98-21d0-4761-a4e4-0a75367b4c31" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 24 
19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.910327 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" event={"ID":"6432557f-df32-40cb-a0b5-6ee2c652b10f","Type":"ContainerStarted","Data":"5491dd95b1ec5831780b3c4d191e870ffa7613166f7c0486a8dcedf888d039ea"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.910386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" event={"ID":"6432557f-df32-40cb-a0b5-6ee2c652b10f","Type":"ContainerStarted","Data":"63c503c544b4aefc1d0cc9d8e3156649a47552a71400aa96d9bc0e94d8a7afb6"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.912588 4812 generic.go:334] "Generic (PLEG): container finished" podID="16e4032b-e804-4cf0-9a9f-0c23319c06df" containerID="e792a1e3ea368db9c95d9d7b9f3f176b02ac85471f6e0ef358127eb9218cf0d0" exitCode=0 Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.912643 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" event={"ID":"16e4032b-e804-4cf0-9a9f-0c23319c06df","Type":"ContainerDied","Data":"e792a1e3ea368db9c95d9d7b9f3f176b02ac85471f6e0ef358127eb9218cf0d0"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.915415 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" event={"ID":"a2fd27c6-f3db-49bd-b347-b00275c78e6e","Type":"ContainerStarted","Data":"76381aabb330f015746173c4093eeddbde279cebbb6e307ad3db75647cf9f7e5"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.917308 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cs84n" event={"ID":"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7","Type":"ContainerStarted","Data":"6c80a2bfec6d131834383e76fcd30bece4c3edf42db2d07db95d4369f8af0dae"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.917580 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-xv79f"] Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.920116 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" event={"ID":"29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8","Type":"ContainerStarted","Data":"037793b81368dc85bb8d604a1aad251a349645c88f475151861366dd57dbd3df"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.932556 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" event={"ID":"afeebfe0-572d-4c6e-b706-2a35b40e25e0","Type":"ContainerStarted","Data":"57d24b0bbd4039961e52464f0cee23bb6477a7338d4d86817bf5dc852d8590fc"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.934712 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" event={"ID":"7dab4563-6b36-4d8e-a789-58459d65cb7c","Type":"ContainerStarted","Data":"6672a57d00dc3611f8e3622a0773782dcb96666f5786af18e4692d45be520b6a"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.936463 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" event={"ID":"139070ff-0113-4e25-93ef-d1861e0c4318","Type":"ContainerStarted","Data":"7e303047c93c74156f3661762f2343758d5cce1dd908258ea8ab2673fc40075f"} Nov 24 19:19:19 crc 
kubenswrapper[4812]: I1124 19:19:19.940989 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" event={"ID":"74f627e0-8a46-4876-b456-9efda3c4ad41","Type":"ContainerStarted","Data":"684ebb6db1ac664213fa4b7691ec9274466b42a39b22f7ba643b89a385d9c952"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.943394 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qrrxs" event={"ID":"8d30a2dd-d2ba-463e-8d95-6f47271bac81","Type":"ContainerStarted","Data":"9af22bdfc3fe095e04784e50aa5192e19b88059c9062310078e740239fc1c8cf"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.946835 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:19 crc kubenswrapper[4812]: E1124 19:19:19.947930 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.447914481 +0000 UTC m=+154.236866852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.948772 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-krdpg" event={"ID":"2ce9900d-791b-48db-ad36-1fcdc3200dbf","Type":"ContainerStarted","Data":"45c677c17c24870bc06903bae70be314ae8fa0b44cd5363aab9c2f73b05725b8"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.948873 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-krdpg" event={"ID":"2ce9900d-791b-48db-ad36-1fcdc3200dbf","Type":"ContainerStarted","Data":"5337345dfa2cfc09a035c0dfe4ec95cb2de9f5d692b2f8eaa9362bbaedfa538e"} Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.951158 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.953027 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-4spkn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.953082 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4spkn" podUID="1b540550-77e7-4545-9e34-b972ab5ec677" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.957873 4812 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-krdpg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 19:19:19 crc kubenswrapper[4812]: I1124 19:19:19.958045 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-krdpg" podUID="2ce9900d-791b-48db-ad36-1fcdc3200dbf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.092469 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.095416 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.595402399 +0000 UTC m=+154.384354990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.200239 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.213752 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.713727666 +0000 UTC m=+154.502680027 (durationBeforeRetry 500ms). 
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.214821 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.215139 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.715131965 +0000 UTC m=+154.504084336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.316024 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.316812 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.816790987 +0000 UTC m=+154.605743368 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.424074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.424369 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:20.924356783 +0000 UTC m=+154.713309154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.479500 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4spkn" podStartSLOduration=129.479476369 podStartE2EDuration="2m9.479476369s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:20.476112835 +0000 UTC m=+154.265065206" watchObservedRunningTime="2025-11-24 19:19:20.479476369 +0000 UTC m=+154.268428770"
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.527163 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.527712 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.027679662 +0000 UTC m=+154.816632043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.537380 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.537879 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.037863325 +0000 UTC m=+154.826815686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.555756 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-nzz22"]
Nov 24 19:19:20 crc kubenswrapper[4812]: W1124 19:19:20.582569 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bb2abbd_0aca_4038_91a7_4fff299bfb45.slice/crio-78af9a9f90c7bf6dd11616c017cd0f562996d82ca44e297393f35d8b79eadf8f WatchSource:0}: Error finding container 78af9a9f90c7bf6dd11616c017cd0f562996d82ca44e297393f35d8b79eadf8f: Status 404 returned error can't find the container with id 78af9a9f90c7bf6dd11616c017cd0f562996d82ca44e297393f35d8b79eadf8f
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.640833 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.641673 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.141642136 +0000 UTC m=+154.930594507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.648227 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf"]
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.753588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.754152 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.25414275 +0000 UTC m=+155.043095111 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.867761 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.867907 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.367890639 +0000 UTC m=+155.156843000 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.868095 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.868636 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.36862911 +0000 UTC m=+155.157581471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.931926 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" podStartSLOduration=129.931911703 podStartE2EDuration="2m9.931911703s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:20.929777503 +0000 UTC m=+154.718729874" watchObservedRunningTime="2025-11-24 19:19:20.931911703 +0000 UTC m=+154.720864064"
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.969418 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:20 crc kubenswrapper[4812]: E1124 19:19:20.969731 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.469710946 +0000 UTC m=+155.258663317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:20 crc kubenswrapper[4812]: I1124 19:19:20.981258 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-krdpg" podStartSLOduration=129.981224126 podStartE2EDuration="2m9.981224126s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:20.976915316 +0000 UTC m=+154.765867687" watchObservedRunningTime="2025-11-24 19:19:20.981224126 +0000 UTC m=+154.770176497"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.023604 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" event={"ID":"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1","Type":"ContainerStarted","Data":"7fad445a4ecfe0759a400d5b21323e5144b7634333d4eca588b2b6b21124456d"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.057169 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.060453 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.074907 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.075445 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.57543315 +0000 UTC m=+155.364385521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.075499 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ccmv2"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.075788 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" event={"ID":"6f120455-5d97-4a64-a279-3992aeba7663","Type":"ContainerStarted","Data":"41dce4cee50a7d50cc69a39d0bfc2120d52b74b221cad9e9868306382736ca24"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.077719 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-f6574"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.085982 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nvpxb"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.086036 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-d2q8r"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.087769 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jb4lb" podStartSLOduration=130.087759033 podStartE2EDuration="2m10.087759033s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.087166387 +0000 UTC m=+154.876118758" watchObservedRunningTime="2025-11-24 19:19:21.087759033 +0000 UTC m=+154.876711404"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.104151 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" event={"ID":"6432557f-df32-40cb-a0b5-6ee2c652b10f","Type":"ContainerStarted","Data":"b817df1a04ac3e98c40f904d39728e18e1e9363be8fa11b0ae69b7ecc82c00ea"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.117348 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" podStartSLOduration=129.117316037 podStartE2EDuration="2m9.117316037s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.116844884 +0000 UTC m=+154.905797255" watchObservedRunningTime="2025-11-24 19:19:21.117316037 +0000 UTC m=+154.906268408"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.127621 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" event={"ID":"139070ff-0113-4e25-93ef-d1861e0c4318","Type":"ContainerStarted","Data":"53d3b57c3589e46aff8cb0cb2583ab728264a20b3d6a932bd08c6e8550021552"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.129979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" event={"ID":"a2fd27c6-f3db-49bd-b347-b00275c78e6e","Type":"ContainerStarted","Data":"f4620c8369c13a169a4f84410ea50c4482e5b7590416e2259f1964a9c663f645"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.130035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" event={"ID":"a2fd27c6-f3db-49bd-b347-b00275c78e6e","Type":"ContainerStarted","Data":"082188be1467dbdffe822436155d8d0ca1afb484d84e1e427dec54242c343550"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.130800 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.151287 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" podStartSLOduration=129.151263973 podStartE2EDuration="2m9.151263973s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.147501398 +0000 UTC m=+154.936453769" watchObservedRunningTime="2025-11-24 19:19:21.151263973 +0000 UTC m=+154.940216344"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.160473 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-cs84n" event={"ID":"b781aa9b-e5aa-43c5-b9a5-4962fe210ad7","Type":"ContainerStarted","Data":"442c39eb1fd4ffe48d0e6d5f4e3632b0aa8b6e614b0ac8774863ca590812b101"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.166249 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.173048 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.173126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" event={"ID":"a153dfd1-0dbb-4913-bcf4-768496d5db9a","Type":"ContainerStarted","Data":"a7e7517400b2eb55adc739807e5a675c45aa251f0302f0d06ddedff8ef5b5521"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.173161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" event={"ID":"a153dfd1-0dbb-4913-bcf4-768496d5db9a","Type":"ContainerStarted","Data":"ea153a49a003f2340dd7fc1a259298f16c6de2740647ce6c0d521a7360127953"}
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.175035 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd"]
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.176040 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.176201 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.676179647 +0000 UTC m=+155.465132018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.676179647 +0000 UTC m=+155.465132018 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.176348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.177238 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.677204095 +0000 UTC m=+155.466156466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.178292 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" event={"ID":"16e4032b-e804-4cf0-9a9f-0c23319c06df","Type":"ContainerStarted","Data":"25209b90878e2d30328e84e5a49e78f5507ea1b14788bd868d78967140e91e71"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.193400 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gzd88" podStartSLOduration=130.193377546 podStartE2EDuration="2m10.193377546s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.184825308 +0000 UTC m=+154.973777689" watchObservedRunningTime="2025-11-24 19:19:21.193377546 +0000 UTC m=+154.982329917" Nov 24 19:19:21 crc kubenswrapper[4812]: W1124 19:19:21.194262 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3fdb6f5e_7a9b_4dca_a612_07aa2e4751a3.slice/crio-289e9629d8710ef4e531099eee6f7adc7f3434bd9e2695fec74717604ba783bf WatchSource:0}: Error finding container 289e9629d8710ef4e531099eee6f7adc7f3434bd9e2695fec74717604ba783bf: Status 404 returned error can't find the container with id 289e9629d8710ef4e531099eee6f7adc7f3434bd9e2695fec74717604ba783bf Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.194639 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" event={"ID":"29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8","Type":"ContainerStarted","Data":"518f2fb29e06a0292c22e694d9a8be0768ec135c78f57814d71f8e71b328551a"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.223281 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk"] Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.237363 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9lvg4" podStartSLOduration=130.237345981 podStartE2EDuration="2m10.237345981s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.236752264 +0000 UTC m=+155.025704645" watchObservedRunningTime="2025-11-24 19:19:21.237345981 +0000 UTC m=+155.026298352" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.299905 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nzz22" event={"ID":"9bb2abbd-0aca-4038-91a7-4fff299bfb45","Type":"ContainerStarted","Data":"78af9a9f90c7bf6dd11616c017cd0f562996d82ca44e297393f35d8b79eadf8f"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.301120 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.302095 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.802080274 +0000 UTC m=+155.591032645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.303972 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-6nn48"] Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.304030 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7"] Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.307783 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pgq95"] Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.326261 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-7ndrw" podStartSLOduration=130.326246337 podStartE2EDuration="2m10.326246337s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.32383065 +0000 UTC m=+155.112783021" watchObservedRunningTime="2025-11-24 19:19:21.326246337 +0000 UTC m=+155.115198708" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.334824 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" event={"ID":"0787504c-4f5c-4089-8f03-679eb74d7b50","Type":"ContainerStarted","Data":"2abfa1a61ee889355b852c0f7c86817924feb37c9bf00be4e9c2d20979b32190"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.334918 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" event={"ID":"0787504c-4f5c-4089-8f03-679eb74d7b50","Type":"ContainerStarted","Data":"0bdc3855ae773de81240e04367f9f87da2b0e88e0cf33c99f120e34cb744d5b2"} Nov 24 19:19:21 crc kubenswrapper[4812]: W1124 19:19:21.367611 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60c29281_e44c_44f2_9981_e76e97baf4a8.slice/crio-0f51338dc81577da5a1b4e7e1c10d1ef2a4cf679f92ba8f800a741bb7d91d2f0 WatchSource:0}: Error finding container 0f51338dc81577da5a1b4e7e1c10d1ef2a4cf679f92ba8f800a741bb7d91d2f0: Status 404 returned error can't find the container with id 0f51338dc81577da5a1b4e7e1c10d1ef2a4cf679f92ba8f800a741bb7d91d2f0 Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.382893 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-22pnq" podStartSLOduration=130.382864555 podStartE2EDuration="2m10.382864555s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.38199348 +0000 UTC m=+155.170945851" watchObservedRunningTime="2025-11-24 19:19:21.382864555 +0000 UTC m=+155.171816916" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.390737 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" event={"ID":"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a","Type":"ContainerStarted","Data":"04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.424183 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs"] Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.426345 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qrrxs" event={"ID":"8d30a2dd-d2ba-463e-8d95-6f47271bac81","Type":"ContainerStarted","Data":"f9c035945cdedfd8cf5881338f952edeb8f62595e89ed3ba6c5843e70425dd08"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.444494 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.446074 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:21.946062955 +0000 UTC m=+155.735015326 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.452299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" event={"ID":"fb5635f7-027d-4411-ad86-2e27d7e74efb","Type":"ContainerStarted","Data":"84380c95dec7391c6f7854ece2f661ddcec3f8e2c4a0e37cc8effe605f49bc8a"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.452701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" event={"ID":"fb5635f7-027d-4411-ad86-2e27d7e74efb","Type":"ContainerStarted","Data":"86a398c1cbc7ca17c7e640721a27add34fe8b7689d9782fe58354ebdc7776d71"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.460986 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.463794 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp"] Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.497487 4812 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-6zn22 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.497539 4812 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" podUID="1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.497855 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" event={"ID":"89c6a1cc-e545-44a6-a7e3-8e900ca0858f","Type":"ContainerStarted","Data":"235898ad0088676c705f828eaba86e1a2e61d9bcf9b50c24746cd97ef5688911"} Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.501097 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zv5wk" podStartSLOduration=130.501086488 podStartE2EDuration="2m10.501086488s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.498073414 +0000 UTC m=+155.287025785" watchObservedRunningTime="2025-11-24 19:19:21.501086488 +0000 UTC m=+155.290038859" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.501788 4812 patch_prober.go:28] interesting pod/console-operator-58897d9998-krdpg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.501846 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-krdpg" podUID="2ce9900d-791b-48db-ad36-1fcdc3200dbf" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.502072 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx"] Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.502527 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-4spkn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.502544 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4spkn" podUID="1b540550-77e7-4545-9e34-b972ab5ec677" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.520629 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.520687 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.546114 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.546275 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.046249816 +0000 UTC m=+155.835202187 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.546804 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.553207 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.05283548 +0000 UTC m=+155.841787851 (durationBeforeRetry 500ms). 
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.607032 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7dndk" podStartSLOduration=129.607017269 podStartE2EDuration="2m9.607017269s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.605723923 +0000 UTC m=+155.394676294" watchObservedRunningTime="2025-11-24 19:19:21.607017269 +0000 UTC m=+155.395969640"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.629460 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-kqn98" podStartSLOduration=130.629444584 podStartE2EDuration="2m10.629444584s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.626378579 +0000 UTC m=+155.415330950" watchObservedRunningTime="2025-11-24 19:19:21.629444584 +0000 UTC m=+155.418396955"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.657899 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.659574 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.159558613 +0000 UTC m=+155.948510984 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.695214 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2knf7" podStartSLOduration=129.695190916 podStartE2EDuration="2m9.695190916s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.676802513 +0000 UTC m=+155.465754884" watchObservedRunningTime="2025-11-24 19:19:21.695190916 +0000 UTC m=+155.484143277"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.706463 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" podStartSLOduration=129.706438829 podStartE2EDuration="2m9.706438829s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.706093839 +0000 UTC m=+155.495046210" watchObservedRunningTime="2025-11-24 19:19:21.706438829 +0000 UTC m=+155.495391200"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.759630 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.759918 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.259905708 +0000 UTC m=+156.048858079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.868904 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.870614 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.370598682 +0000 UTC m=+156.159551053 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.900556 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-cs84n" podStartSLOduration=5.900530926 podStartE2EDuration="5.900530926s" podCreationTimestamp="2025-11-24 19:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:21.897637325 +0000 UTC m=+155.686589706" watchObservedRunningTime="2025-11-24 19:19:21.900530926 +0000 UTC m=+155.689483297"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.929442 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qrrxs"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.943460 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 19:19:21 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld
Nov 24 19:19:21 crc kubenswrapper[4812]: [+]process-running ok
Nov 24 19:19:21 crc kubenswrapper[4812]: healthz check failed
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.943504 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 19:19:21 crc kubenswrapper[4812]: I1124 19:19:21.971594 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:21 crc kubenswrapper[4812]: E1124 19:19:21.971918 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.471905814 +0000 UTC m=+156.260858185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.074834 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.075008 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.574985106 +0000 UTC m=+156.363937477 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.075055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.075434 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.575427008 +0000 UTC m=+156.364379369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.176092 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.176406 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.676390441 +0000 UTC m=+156.465342812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.182026 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" podStartSLOduration=130.182007478 podStartE2EDuration="2m10.182007478s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.178958883 +0000 UTC m=+155.967911254" watchObservedRunningTime="2025-11-24 19:19:22.182007478 +0000 UTC m=+155.970959849"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.314179 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.314759 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.814738045 +0000 UTC m=+156.603690416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.331484 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qrrxs" podStartSLOduration=131.331459741 podStartE2EDuration="2m11.331459741s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.328405786 +0000 UTC m=+156.117358177" watchObservedRunningTime="2025-11-24 19:19:22.331459741 +0000 UTC m=+156.120412112"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.358815 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-qjq4v" podStartSLOduration=130.358789112 podStartE2EDuration="2m10.358789112s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.353131265 +0000 UTC m=+156.142083636" watchObservedRunningTime="2025-11-24 19:19:22.358789112 +0000 UTC m=+156.147741483"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.417582 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.417914 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:22.917897119 +0000 UTC m=+156.706849480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.427229 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" podStartSLOduration=131.427204418 podStartE2EDuration="2m11.427204418s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.418152336 +0000 UTC m=+156.207104707" watchObservedRunningTime="2025-11-24 19:19:22.427204418 +0000 UTC m=+156.216156789"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.508899 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" event={"ID":"7b8ace94-12b2-4176-ba9f-e14f0b55e3c1","Type":"ContainerStarted","Data":"860ea693e5d0fdb3b569843786619f7ddb6c2cad3b053eb6c45060fba6891cbd"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.519000 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" event={"ID":"89c6a1cc-e545-44a6-a7e3-8e900ca0858f","Type":"ContainerStarted","Data":"5a7ec3c92bb80458bbb70a68d94a3fc2bf4e1afb14a2cea702642a135f49c03b"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.520152 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.520890 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.020871118 +0000 UTC m=+156.809823589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.529646 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" event={"ID":"e8777c17-ca96-4124-9420-0d41d4d458a9","Type":"ContainerStarted","Data":"98305b362ab5f6e11356b5980b6f93594038e2fc4cb53669b6ac01eb4eb34119"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.529703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" event={"ID":"e8777c17-ca96-4124-9420-0d41d4d458a9","Type":"ContainerStarted","Data":"c5d5a41bceca0d105d63d62b5fe8aba324aad9e42a26b528a30369d50a876ac5"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.530773 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.531722 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5bljf" podStartSLOduration=130.53170658 podStartE2EDuration="2m10.53170658s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.529960491 +0000 UTC m=+156.318912882" watchObservedRunningTime="2025-11-24 19:19:22.53170658 +0000 UTC m=+156.320658951"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.534679 4812 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7kfm7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body=
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.534728 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" podUID="e8777c17-ca96-4124-9420-0d41d4d458a9" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.537237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" event={"ID":"85cc9fad-228f-4d80-8348-8743b79cf691","Type":"ContainerStarted","Data":"f6b4c48da98b2ed8fbb2a81c0acd84b77b1ce7feefe78cc278af535054369e7a"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.537529 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" event={"ID":"85cc9fad-228f-4d80-8348-8743b79cf691","Type":"ContainerStarted","Data":"fd7399b9f1425360130bdfc89bf5f57833a0a40802cae8ce92c7fc32e13ef6b2"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.554496 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-bnzn9" event={"ID":"29bdcdf7-cbd6-4ec4-93b3-84e1424bdaa8","Type":"ContainerStarted","Data":"d885f473aafb09062448ac48afb80b99fef975f3021073eda01f0e0e0a3e0f1f"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.572663 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" event={"ID":"89a685cc-973c-4317-b6fe-5bc842e32099","Type":"ContainerStarted","Data":"04cb93a7af0996c4077c97be9869ff3e819c474357cc487a1130750133c5197e"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.572933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" event={"ID":"89a685cc-973c-4317-b6fe-5bc842e32099","Type":"ContainerStarted","Data":"abfd78a3fd873c048f74fa9c13fea10f6393a8970fd05a05bd0e9f689e0251ca"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.575941 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qln2c" podStartSLOduration=131.575926352 podStartE2EDuration="2m11.575926352s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.573155944 +0000 UTC m=+156.362108335" watchObservedRunningTime="2025-11-24 19:19:22.575926352 +0000 UTC m=+156.364878723"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.581411 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" event={"ID":"04267c57-8e76-48a4-b768-ce571525ed62","Type":"ContainerStarted","Data":"d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.581466 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" event={"ID":"04267c57-8e76-48a4-b768-ce571525ed62","Type":"ContainerStarted","Data":"8047621a4f613eee1e27dca4e22c8386e3209902069afc802a3a1ee60c9f8146"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.582301 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.584912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nzz22" event={"ID":"9bb2abbd-0aca-4038-91a7-4fff299bfb45","Type":"ContainerStarted","Data":"21a1d248d9e9617e865c7ad96f5727f3b5fa48e85440625dd2840c14bf4cf168"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.584957 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-nzz22" event={"ID":"9bb2abbd-0aca-4038-91a7-4fff299bfb45","Type":"ContainerStarted","Data":"5ec763973e8af59908cb8a6674426be4c570ee1eeaa95093c0f13766036df1ac"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.586073 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-nzz22"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.595009 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nvpxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body=
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.595103 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" podUID="04267c57-8e76-48a4-b768-ce571525ed62" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.597043 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" event={"ID":"726e8147-8d74-4797-8989-fdb12e97bd01","Type":"ContainerStarted","Data":"9fca0fd5b90bfce6699c134e373e24052ab6049ed7e06784610aea3935585930"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.597090 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" event={"ID":"726e8147-8d74-4797-8989-fdb12e97bd01","Type":"ContainerStarted","Data":"f647f4855fe7975f0fb9f831aa9bc1538e97b92e5b0d21eb98752816cb87929a"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.621310 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.621913 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.121889442 +0000 UTC m=+156.910841813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.622477 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.630436 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.1304151 +0000 UTC m=+156.919367471 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.633453 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" event={"ID":"16e4032b-e804-4cf0-9a9f-0c23319c06df","Type":"ContainerStarted","Data":"9b028b96c7a20fabe6d8e6567e8e3e1a35b2d27c1ecef730143bec26df292e1f"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.645082 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" podStartSLOduration=130.645067288 podStartE2EDuration="2m10.645067288s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.644701468 +0000 UTC m=+156.433653839" watchObservedRunningTime="2025-11-24 19:19:22.645067288 +0000 UTC m=+156.434019649"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.646751 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" podStartSLOduration=130.646744564 podStartE2EDuration="2m10.646744564s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.608321894 +0000 UTC m=+156.397274265" watchObservedRunningTime="2025-11-24 19:19:22.646744564 +0000 UTC m=+156.435696935"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.657681 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" event={"ID":"c35074ed-4b34-451a-aea5-9022f6f8c685","Type":"ContainerStarted","Data":"a278077c4ea64f39afd189e038b2afa83a6d0b2795a218487475341584bb2c3c"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.658016 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" event={"ID":"c35074ed-4b34-451a-aea5-9022f6f8c685","Type":"ContainerStarted","Data":"00755edfe4173433e79d6534d731b427cf638ad3981990787ed224f91ec4f522"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.681474 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" event={"ID":"60c29281-e44c-44f2-9981-e76e97baf4a8","Type":"ContainerStarted","Data":"8b0f2081f9c87dab975e6417b7d78964a40f8ae764144900eda680e2e3831ac1"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.681528 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" event={"ID":"60c29281-e44c-44f2-9981-e76e97baf4a8","Type":"ContainerStarted","Data":"0f51338dc81577da5a1b4e7e1c10d1ef2a4cf679f92ba8f800a741bb7d91d2f0"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.681891 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-nzz22" podStartSLOduration=6.681867243 podStartE2EDuration="6.681867243s" podCreationTimestamp="2025-11-24 19:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.681442231 +0000 UTC m=+156.470394612" watchObservedRunningTime="2025-11-24 19:19:22.681867243 +0000 UTC m=+156.470819614"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.689084 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" event={"ID":"0d003f73-5e4b-475f-bb57-66f13916a0c5","Type":"ContainerStarted","Data":"82b6d648c17054b9802c5d6a6e3843eccc2569b98a523f2923b2b20410361b35"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.689408 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" event={"ID":"0d003f73-5e4b-475f-bb57-66f13916a0c5","Type":"ContainerStarted","Data":"6ddfcf8167391d1f9b887463fb6be509002201e87245fad93100f6882517621a"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.708997 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" event={"ID":"d4a06222-8c3c-46c0-a29c-acf87a56c0db","Type":"ContainerStarted","Data":"4faff94b18c2f3b43612ec3e00b09bc7add56dca328df185030b9b1fda1d5846"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.709039 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" event={"ID":"d4a06222-8c3c-46c0-a29c-acf87a56c0db","Type":"ContainerStarted","Data":"d241eac94d101cea7e20579fca84462524136b77c31512bdd8f12561dfbe5203"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.712212 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" event={"ID":"47cb9fd2-9706-464a-9360-901e3da7aa86","Type":"ContainerStarted","Data":"141474d92af7a96c829642f6a59371ec2b5a6bf2e2bd2a3748dd7344c3f1e689"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.713633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" event={"ID":"793a0705-3d09-4760-aa80-481eddf0ad45","Type":"ContainerStarted","Data":"6e539a26f097f02d277d2e32a5d7425e691e954c020e97d418097180653ff852"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.713656 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" event={"ID":"793a0705-3d09-4760-aa80-481eddf0ad45","Type":"ContainerStarted","Data":"2b2e8916d52b312931aa3f92c0466661fdb7e4d6e8867cd2dc714035f1c032ca"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.717301 4812 generic.go:334] "Generic (PLEG): container finished" podID="a153dfd1-0dbb-4913-bcf4-768496d5db9a" containerID="a7e7517400b2eb55adc739807e5a675c45aa251f0302f0d06ddedff8ef5b5521" exitCode=0
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.717377 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" event={"ID":"a153dfd1-0dbb-4913-bcf4-768496d5db9a","Type":"ContainerDied","Data":"a7e7517400b2eb55adc739807e5a675c45aa251f0302f0d06ddedff8ef5b5521"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.717394 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" event={"ID":"a153dfd1-0dbb-4913-bcf4-768496d5db9a","Type":"ContainerStarted","Data":"a734ebd58beef320e47ca0fd89cd7b20ee6ad6568d2903bcf197a7eb4dfe7e22"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.717761 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.719289 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" event={"ID":"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3","Type":"ContainerStarted","Data":"8a659303bb1cc2979068fdfad7f4f4d18cf795305edff1a1195b91a5c7017f0a"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.719310 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" event={"ID":"3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3","Type":"ContainerStarted","Data":"289e9629d8710ef4e531099eee6f7adc7f3434bd9e2695fec74717604ba783bf"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.719905 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.723710 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.724921 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.224904882 +0000 UTC m=+157.013857253 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.731240 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" event={"ID":"f95538f5-bbe6-43f4-a40c-0a60b3a2b828","Type":"ContainerStarted","Data":"702e2d78bb927f296720acc1c921e00973449d6fe185e5833660decd710300a9"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.731293 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" event={"ID":"f95538f5-bbe6-43f4-a40c-0a60b3a2b828","Type":"ContainerStarted","Data":"2f44b903a7726bd476336e04f42bd3d101cc6f7e19f7a942c585dc4f46b53a15"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.731303 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" event={"ID":"f95538f5-bbe6-43f4-a40c-0a60b3a2b828","Type":"ContainerStarted","Data":"fac7cb1574a17eba6a7978e8daa4a878c11f30406082569c656cc5256a7f515f"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.731734 4812 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-v8jpj container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.731880 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" podUID="3fdb6f5e-7a9b-4dca-a612-07aa2e4751a3" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.731902 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" podStartSLOduration=131.731891077 podStartE2EDuration="2m11.731891077s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.731343211 +0000 UTC m=+156.520295592" watchObservedRunningTime="2025-11-24 19:19:22.731891077 +0000 UTC m=+156.520843448"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.733798 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-bsr2j" podStartSLOduration=130.733783859 podStartE2EDuration="2m10.733783859s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.70869897 +0000 UTC m=+156.497651341" watchObservedRunningTime="2025-11-24 19:19:22.733783859 +0000 UTC m=+156.522736230"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.735390 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ccmv2" event={"ID":"e972f42f-2bbb-4c1a-83a8-56b3fab82b60","Type":"ContainerStarted","Data":"6b20eac2178eca43bd60f768633b938bdb6f68dae9057ecb27cdacd607dfa87f"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.735425 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ccmv2" event={"ID":"e972f42f-2bbb-4c1a-83a8-56b3fab82b60","Type":"ContainerStarted","Data":"4ed0031a38234f120a127a5b61e3b1d29d21ed1382ff8120370bbfc7c691ab6b"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.742860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" event={"ID":"dd6fed1e-59d9-400c-b9cd-561ec215a48e","Type":"ContainerStarted","Data":"2fb393257f8e9b09b645ff06c5ad2f0968e31811b4f54a2aa8403528be9a7a76"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.743804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" event={"ID":"dd6fed1e-59d9-400c-b9cd-561ec215a48e","Type":"ContainerStarted","Data":"9c17f62d1c002c1e12696ecead7561239c6da33c6e965fb0ebbb23f2ea8a4dd6"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.751258 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" podStartSLOduration=131.751240486 podStartE2EDuration="2m11.751240486s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.7481614 +0000 UTC m=+156.537113771" watchObservedRunningTime="2025-11-24 19:19:22.751240486 +0000 UTC m=+156.540192857"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.771962 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" event={"ID":"2c4fc3d9-eec6-483b-94ac-626032f65ff6","Type":"ContainerStarted","Data":"ea323af4a23e74713956a083b4347e10572575c5c429923ae1ce70336ac6a81d"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.772357 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.772373 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" event={"ID":"2c4fc3d9-eec6-483b-94ac-626032f65ff6","Type":"ContainerStarted","Data":"512468cdb4a18c6872061209c9850981065c20a2a9c520acd60e4b480dd28544"}
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.795505 4812 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-56qgd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.795562 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" podUID="2c4fc3d9-eec6-483b-94ac-626032f65ff6" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.798603
4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.798757 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" podStartSLOduration=131.798745609 podStartE2EDuration="2m11.798745609s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.773270799 +0000 UTC m=+156.562223180" watchObservedRunningTime="2025-11-24 19:19:22.798745609 +0000 UTC m=+156.587697980" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.801093 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qr7zk" podStartSLOduration=130.801079884 podStartE2EDuration="2m10.801079884s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.798954705 +0000 UTC m=+156.587907076" watchObservedRunningTime="2025-11-24 19:19:22.801079884 +0000 UTC m=+156.590032255" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.825486 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.830271 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.330253597 +0000 UTC m=+157.119205968 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.868700 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-277zs" podStartSLOduration=131.868686357 podStartE2EDuration="2m11.868686357s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.866770544 +0000 UTC m=+156.655722925" watchObservedRunningTime="2025-11-24 19:19:22.868686357 +0000 UTC m=+156.657638728" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.869822 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" podStartSLOduration=131.869816839 podStartE2EDuration="2m11.869816839s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.823692094 +0000 UTC m=+156.612644455" watchObservedRunningTime="2025-11-24 19:19:22.869816839 +0000 UTC m=+156.658769210" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.898255 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" podStartSLOduration=130.898237131 podStartE2EDuration="2m10.898237131s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.89535708 +0000 UTC m=+156.684309471" watchObservedRunningTime="2025-11-24 19:19:22.898237131 +0000 UTC m=+156.687189502" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.926772 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:22 crc kubenswrapper[4812]: E1124 19:19:22.927188 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.427170177 +0000 UTC m=+157.216122558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.932543 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 19:19:22 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Nov 24 19:19:22 crc kubenswrapper[4812]: [+]process-running ok Nov 24 19:19:22 crc kubenswrapper[4812]: healthz check failed Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.932606 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.935805 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8zwdr" podStartSLOduration=130.935793997 podStartE2EDuration="2m10.935793997s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.935727605 +0000 UTC m=+156.724679986" watchObservedRunningTime="2025-11-24 19:19:22.935793997 +0000 UTC m=+156.724746368" Nov 24 19:19:22 crc kubenswrapper[4812]: I1124 19:19:22.979416 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ccmv2" podStartSLOduration=6.979394462 podStartE2EDuration="6.979394462s" podCreationTimestamp="2025-11-24 19:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:22.979050322 +0000 UTC m=+156.768002693" watchObservedRunningTime="2025-11-24 19:19:22.979394462 +0000 UTC m=+156.768346833" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.029614 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.029882 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.529870128 +0000 UTC m=+157.318822499 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.057200 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" podStartSLOduration=131.057183139 podStartE2EDuration="2m11.057183139s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:23.055223534 +0000 UTC m=+156.844175905" watchObservedRunningTime="2025-11-24 19:19:23.057183139 +0000 UTC m=+156.846135510" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.057539 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-d2q8r" podStartSLOduration=131.057533288 podStartE2EDuration="2m11.057533288s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:23.033781567 +0000 UTC m=+156.822733948" watchObservedRunningTime="2025-11-24 19:19:23.057533288 +0000 UTC m=+156.846485659" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.077220 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.078527 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.098426 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.130863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.131058 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.631029836 +0000 UTC m=+157.419982207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.131282 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.131573 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.631561771 +0000 UTC m=+157.420514132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.199453 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" podStartSLOduration=131.199436331 podStartE2EDuration="2m11.199436331s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:23.198117575 +0000 UTC m=+156.987069946" watchObservedRunningTime="2025-11-24 19:19:23.199436331 +0000 UTC m=+156.988388702" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.231998 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.232397 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.732380599 +0000 UTC m=+157.521332970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.268078 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.268128 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.269806 4812 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xtdvk container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.269852 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk" podUID="16e4032b-e804-4cf0-9a9f-0c23319c06df" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.333197 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.333555 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:23.833544447 +0000 UTC m=+157.622496808 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.434660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.434846 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 19:19:23.934816549 +0000 UTC m=+157.723768920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.536183 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.536492 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.036481021 +0000 UTC m=+157.825433392 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.637545 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.637727 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.137700321 +0000 UTC m=+157.926652692 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.637826 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.638184 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.138170514 +0000 UTC m=+157.927122885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.739231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.739633 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.23960397 +0000 UTC m=+158.028556341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.776733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-f6574" event={"ID":"dd6fed1e-59d9-400c-b9cd-561ec215a48e","Type":"ContainerStarted","Data":"93c8a8cd8269f4446b2c6e653232ecba1127fbab36f64ffed2b2d5cf6de7e4d6"} Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.778996 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" event={"ID":"47cb9fd2-9706-464a-9360-901e3da7aa86","Type":"ContainerStarted","Data":"525c0277ac49b23dadfb1c1a6990960f8293dc82b72b46d42fd58ee99e46e2fc"} Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.780485 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zs9kb" event={"ID":"793a0705-3d09-4760-aa80-481eddf0ad45","Type":"ContainerStarted","Data":"6df475f6e76212982b6bc852f04bb078398322523682e69fa0a5b5a19d437e8e"} Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.782409 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" event={"ID":"c35074ed-4b34-451a-aea5-9022f6f8c685","Type":"ContainerStarted","Data":"0c677f9052e300e6c9a53d8d431b4f5a6d546f99807e2ed7e62291d3088ed996"} Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.786863 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" event={"ID":"85cc9fad-228f-4d80-8348-8743b79cf691","Type":"ContainerStarted","Data":"f786d2c1617a1a6cf5454ef4881e321b9321053b1a3e0988b079ddef0ce49877"} Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.787774 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nvpxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.787827 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" podUID="04267c57-8e76-48a4-b768-ce571525ed62" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.796858 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wfqs2" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.813940 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-v8jpj" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.830600 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-6nn48" podStartSLOduration=131.830584384 
podStartE2EDuration="2m11.830584384s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:23.82540838 +0000 UTC m=+157.614360751" watchObservedRunningTime="2025-11-24 19:19:23.830584384 +0000 UTC m=+157.619536755" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.844904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.853985 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.353968446 +0000 UTC m=+158.142920807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.896131 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-56qgd" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.942387 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 19:19:23 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Nov 24 19:19:23 crc kubenswrapper[4812]: [+]process-running ok Nov 24 19:19:23 crc kubenswrapper[4812]: healthz check failed Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.942471 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.945883 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:23 crc kubenswrapper[4812]: E1124 19:19:23.946571 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.446554505 +0000 UTC m=+158.235506876 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:23 crc kubenswrapper[4812]: I1124 19:19:23.958140 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-qb8jp" podStartSLOduration=131.958119997 podStartE2EDuration="2m11.958119997s" podCreationTimestamp="2025-11-24 19:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:23.915464719 +0000 UTC m=+157.704417090" watchObservedRunningTime="2025-11-24 19:19:23.958119997 +0000 UTC m=+157.747072358" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.055079 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.055489 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.555478029 +0000 UTC m=+158.344430400 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.155827 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.156029 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.65600092 +0000 UTC m=+158.444953291 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.156115 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.156404 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.656392851 +0000 UTC m=+158.445345222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.162933 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7kfm7" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.257084 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.257481 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.757457536 +0000 UTC m=+158.546409907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.257592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.257928 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.757911859 +0000 UTC m=+158.546864290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.359586 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.359837 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.859805938 +0000 UTC m=+158.648758309 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.359928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.360254 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.86024494 +0000 UTC m=+158.649197311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.461384 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.461618 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.961585073 +0000 UTC m=+158.750537444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.462079 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.462388 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:24.962376345 +0000 UTC m=+158.751328716 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.562703 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.562893 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.062869184 +0000 UTC m=+158.851821555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.562975 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.563310 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.063301926 +0000 UTC m=+158.852254297 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.664491 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.664682 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.164655399 +0000 UTC m=+158.953607770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.664784 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.665138 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.165131682 +0000 UTC m=+158.954084053 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.765400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.765714 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.265671723 +0000 UTC m=+159.054624094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.765763 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.766208 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.266200388 +0000 UTC m=+159.055152759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.788568 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-466wh"] Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.789530 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.793758 4812 generic.go:334] "Generic (PLEG): container finished" podID="89a685cc-973c-4317-b6fe-5bc842e32099" containerID="04cb93a7af0996c4077c97be9869ff3e819c474357cc487a1130750133c5197e" exitCode=0 Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.793917 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" event={"ID":"89a685cc-973c-4317-b6fe-5bc842e32099","Type":"ContainerDied","Data":"04cb93a7af0996c4077c97be9869ff3e819c474357cc487a1130750133c5197e"} Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.794607 4812 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nvpxb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.794656 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" podUID="04267c57-8e76-48a4-b768-ce571525ed62" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.20:8080/healthz\": dial tcp 10.217.0.20:8080: connect: connection refused" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.805478 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.810885 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-xv79f" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.833744 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-466wh"] Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.869653 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.869791 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvrd\" (UniqueName: \"kubernetes.io/projected/da6d85be-188f-4631-af4a-49aa40bb0d3e-kube-api-access-trvrd\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.869848 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-catalog-content\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.870373 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-utilities\") pod \"community-operators-466wh\" (UID: 
\"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.870479 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.370462142 +0000 UTC m=+159.159414513 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.961992 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 19:19:24 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Nov 24 19:19:24 crc kubenswrapper[4812]: [+]process-running ok Nov 24 19:19:24 crc kubenswrapper[4812]: healthz check failed Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.962381 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.971136 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-utilities\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.971173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trvrd\" (UniqueName: \"kubernetes.io/projected/da6d85be-188f-4631-af4a-49aa40bb0d3e-kube-api-access-trvrd\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.971194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-catalog-content\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.971227 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:24 crc kubenswrapper[4812]: E1124 19:19:24.971548 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.471535308 +0000 UTC m=+159.260487679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.971942 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-catalog-content\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.972035 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-utilities\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.988917 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xhtx9"] Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.990052 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:24 crc kubenswrapper[4812]: I1124 19:19:24.994441 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.000304 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhtx9"] Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.002846 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trvrd\" (UniqueName: \"kubernetes.io/projected/da6d85be-188f-4631-af4a-49aa40bb0d3e-kube-api-access-trvrd\") pod \"community-operators-466wh\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") " pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.082888 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.083274 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.583257531 +0000 UTC m=+159.372209902 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.163719 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-466wh" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.177808 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zzpbr"] Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.178962 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.184815 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.184863 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87wkq\" (UniqueName: \"kubernetes.io/projected/d5b14a89-062b-410d-8d78-11239a856b78-kube-api-access-87wkq\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.184895 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-catalog-content\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.184920 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-catalog-content\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.184940 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-utilities\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.184959 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-utilities\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc 
kubenswrapper[4812]: I1124 19:19:25.185001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf9p2\" (UniqueName: \"kubernetes.io/projected/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-kube-api-access-qf9p2\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.185295 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.685284273 +0000 UTC m=+159.474236644 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.224041 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzpbr"] Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.285732 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.285960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-catalog-content\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.285993 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-catalog-content\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.286014 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-utilities\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.286041 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-utilities\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.286081 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf9p2\" (UniqueName: 
\"kubernetes.io/projected/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-kube-api-access-qf9p2\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.286134 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87wkq\" (UniqueName: \"kubernetes.io/projected/d5b14a89-062b-410d-8d78-11239a856b78-kube-api-access-87wkq\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.286652 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.786611216 +0000 UTC m=+159.575563587 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.287191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-utilities\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.287237 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-catalog-content\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.287358 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-catalog-content\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.287493 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-utilities\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.328618 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87wkq\" (UniqueName: \"kubernetes.io/projected/d5b14a89-062b-410d-8d78-11239a856b78-kube-api-access-87wkq\") pod \"certified-operators-xhtx9\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.334685 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.343764 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf9p2\" (UniqueName: \"kubernetes.io/projected/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-kube-api-access-qf9p2\") pod \"community-operators-zzpbr\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.398553 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.398848 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:25.898837592 +0000 UTC m=+159.687789963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.401003 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jjshd"] Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.402027 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.407685 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjshd"] Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.499873 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.500207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-catalog-content\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.500367 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mnx6\" (UniqueName: \"kubernetes.io/projected/a8e7743d-c680-4492-a4fe-87196f7fd893-kube-api-access-5mnx6\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.500421 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-utilities\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.500926 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.000879215 +0000 UTC m=+159.789831606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.502738 4812 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.524819 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.586867 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-466wh"] Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.601942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-catalog-content\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.601995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.602015 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mnx6\" (UniqueName: \"kubernetes.io/projected/a8e7743d-c680-4492-a4fe-87196f7fd893-kube-api-access-5mnx6\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.602157 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-utilities\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.603613 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.103599526 +0000 UTC m=+159.892551897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: W1124 19:19:25.603735 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda6d85be_188f_4631_af4a_49aa40bb0d3e.slice/crio-fdd626f4907fad15ac70b8e614d607cd7e152f68f22bf083032a026d26cd009e WatchSource:0}: Error finding container fdd626f4907fad15ac70b8e614d607cd7e152f68f22bf083032a026d26cd009e: Status 404 returned error can't find the container with id fdd626f4907fad15ac70b8e614d607cd7e152f68f22bf083032a026d26cd009e Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.604055 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-catalog-content\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.604579 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-utilities\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.629962 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mnx6\" (UniqueName: \"kubernetes.io/projected/a8e7743d-c680-4492-a4fe-87196f7fd893-kube-api-access-5mnx6\") pod \"certified-operators-jjshd\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.670259 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xhtx9"] Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.705402 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.205369002 +0000 UTC m=+159.994321383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.717108 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.717465 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.717850 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.217808848 +0000 UTC m=+160.006761219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.737646 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:19:25 crc kubenswrapper[4812]: W1124 19:19:25.742685 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5b14a89_062b_410d_8d78_11239a856b78.slice/crio-6dbb30bfb051cc7b69c16e11b45fae5235d1e600171e8bfb3bc8a11efa59d107 WatchSource:0}: Error finding container 6dbb30bfb051cc7b69c16e11b45fae5235d1e600171e8bfb3bc8a11efa59d107: Status 404 returned error can't find the container with id 6dbb30bfb051cc7b69c16e11b45fae5235d1e600171e8bfb3bc8a11efa59d107 Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.820054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.820409 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 19:19:26.320393806 +0000 UTC m=+160.109346177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.850800 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" event={"ID":"47cb9fd2-9706-464a-9360-901e3da7aa86","Type":"ContainerStarted","Data":"c8935616b26c854ca84fdd7fe188d99a22d3ad22cbf734fbb720a7e0576534be"} Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.850846 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" event={"ID":"47cb9fd2-9706-464a-9360-901e3da7aa86","Type":"ContainerStarted","Data":"b84a806415e1c91cbc14ea8e429de3813785bd263b4610ccd107be79acf55343"} Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.855570 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-466wh" event={"ID":"da6d85be-188f-4631-af4a-49aa40bb0d3e","Type":"ContainerStarted","Data":"fdd626f4907fad15ac70b8e614d607cd7e152f68f22bf083032a026d26cd009e"} Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.873200 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhtx9" event={"ID":"d5b14a89-062b-410d-8d78-11239a856b78","Type":"ContainerStarted","Data":"6dbb30bfb051cc7b69c16e11b45fae5235d1e600171e8bfb3bc8a11efa59d107"} Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.924571 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:25 crc kubenswrapper[4812]: E1124 19:19:25.930052 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.430037131 +0000 UTC m=+160.218989502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.934073 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zzpbr"] Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.934376 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 19:19:25 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Nov 24 19:19:25 crc kubenswrapper[4812]: [+]process-running ok Nov 24 19:19:25 crc kubenswrapper[4812]: healthz check failed Nov 24 19:19:25 crc kubenswrapper[4812]: I1124 19:19:25.934568 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.025727 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:26 crc kubenswrapper[4812]: E1124 19:19:26.025905 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.525878941 +0000 UTC m=+160.314831312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.026118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:26 crc kubenswrapper[4812]: E1124 19:19:26.026486 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.526472677 +0000 UTC m=+160.315425048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bd6qw" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.126887 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:26 crc kubenswrapper[4812]: E1124 19:19:26.127539 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 19:19:26.627523242 +0000 UTC m=+160.416475613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.150495 4812 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T19:19:25.502759757Z","Handler":null,"Name":""} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.167678 4812 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.167729 4812 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.198204 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.221838 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jjshd"] Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.228869 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnbtz\" (UniqueName: \"kubernetes.io/projected/89a685cc-973c-4317-b6fe-5bc842e32099-kube-api-access-dnbtz\") pod \"89a685cc-973c-4317-b6fe-5bc842e32099\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.228965 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a685cc-973c-4317-b6fe-5bc842e32099-config-volume\") pod \"89a685cc-973c-4317-b6fe-5bc842e32099\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.229025 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a685cc-973c-4317-b6fe-5bc842e32099-secret-volume\") pod \"89a685cc-973c-4317-b6fe-5bc842e32099\" (UID: \"89a685cc-973c-4317-b6fe-5bc842e32099\") " Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.229236 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.231712 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89a685cc-973c-4317-b6fe-5bc842e32099-config-volume" (OuterVolumeSpecName: "config-volume") pod "89a685cc-973c-4317-b6fe-5bc842e32099" (UID: "89a685cc-973c-4317-b6fe-5bc842e32099"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.233758 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.233788 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.237123 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a685cc-973c-4317-b6fe-5bc842e32099-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "89a685cc-973c-4317-b6fe-5bc842e32099" (UID: "89a685cc-973c-4317-b6fe-5bc842e32099"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.239491 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a685cc-973c-4317-b6fe-5bc842e32099-kube-api-access-dnbtz" (OuterVolumeSpecName: "kube-api-access-dnbtz") pod "89a685cc-973c-4317-b6fe-5bc842e32099" (UID: "89a685cc-973c-4317-b6fe-5bc842e32099"). InnerVolumeSpecName "kube-api-access-dnbtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.267649 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bd6qw\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.330830 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.331099 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89a685cc-973c-4317-b6fe-5bc842e32099-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.331117 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/89a685cc-973c-4317-b6fe-5bc842e32099-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.331126 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnbtz\" (UniqueName: \"kubernetes.io/projected/89a685cc-973c-4317-b6fe-5bc842e32099-kube-api-access-dnbtz\") on node \"crc\" DevicePath \"\"" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.334047 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.384705 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.542522 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd6qw"] Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.879275 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" event={"ID":"74998661-1cd3-4ea2-ae49-b1e1a17da3e4","Type":"ContainerStarted","Data":"eb54f30a369f16fd9bdea35066417a2cd87a84227b6b211d964c197fca2c28d2"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.882904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" event={"ID":"89a685cc-973c-4317-b6fe-5bc842e32099","Type":"ContainerDied","Data":"abfd78a3fd873c048f74fa9c13fea10f6393a8970fd05a05bd0e9f689e0251ca"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.882963 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.882972 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abfd78a3fd873c048f74fa9c13fea10f6393a8970fd05a05bd0e9f689e0251ca" Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.895144 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerID="965a085c56e5b4a38ca87d1b74c8788094549e739a1544d6c7d4c0cfc08583cf" exitCode=0 Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.895433 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjshd" event={"ID":"a8e7743d-c680-4492-a4fe-87196f7fd893","Type":"ContainerDied","Data":"965a085c56e5b4a38ca87d1b74c8788094549e739a1544d6c7d4c0cfc08583cf"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.895616 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjshd" event={"ID":"a8e7743d-c680-4492-a4fe-87196f7fd893","Type":"ContainerStarted","Data":"83b27929aaceb32a92d114ac69952564c3945e37aeeacab89299bed056d4ad17"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.901809 4812 generic.go:334] "Generic (PLEG): container finished" podID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerID="47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685" exitCode=0 Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.901944 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzpbr" event={"ID":"f0e8e18b-0be2-41c1-9520-61ed4b767fc2","Type":"ContainerDied","Data":"47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.901993 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzpbr" event={"ID":"f0e8e18b-0be2-41c1-9520-61ed4b767fc2","Type":"ContainerStarted","Data":"9f5816b905ac1ff9bb8ae0eab84e36618b4833ed1299e52fc74b70062f92ea7e"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.903296 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.912009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" 
event={"ID":"47cb9fd2-9706-464a-9360-901e3da7aa86","Type":"ContainerStarted","Data":"7ebe812febca36a8e2ee8c62eee57ba23c247746c495823a77471969ce66d572"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.915710 4812 generic.go:334] "Generic (PLEG): container finished" podID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerID="35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03" exitCode=0 Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.915908 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-466wh" event={"ID":"da6d85be-188f-4631-af4a-49aa40bb0d3e","Type":"ContainerDied","Data":"35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.922117 4812 generic.go:334] "Generic (PLEG): container finished" podID="d5b14a89-062b-410d-8d78-11239a856b78" containerID="06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440" exitCode=0 Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.922161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhtx9" event={"ID":"d5b14a89-062b-410d-8d78-11239a856b78","Type":"ContainerDied","Data":"06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440"} Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.935306 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 19:19:26 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld Nov 24 19:19:26 crc kubenswrapper[4812]: [+]process-running ok Nov 24 19:19:26 crc kubenswrapper[4812]: healthz check failed Nov 24 19:19:26 crc kubenswrapper[4812]: I1124 19:19:26.935374 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.047828 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.048800 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rzgzm"] Nov 24 19:19:27 crc kubenswrapper[4812]: E1124 19:19:27.049078 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a685cc-973c-4317-b6fe-5bc842e32099" containerName="collect-profiles" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.049457 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a685cc-973c-4317-b6fe-5bc842e32099" containerName="collect-profiles" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.049608 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a685cc-973c-4317-b6fe-5bc842e32099" containerName="collect-profiles" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.052047 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzgzm"] Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.052188 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.054737 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.096100 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pgq95" podStartSLOduration=11.096078275 podStartE2EDuration="11.096078275s" podCreationTimestamp="2025-11-24 19:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:27.078891716 +0000 UTC m=+160.867844087" watchObservedRunningTime="2025-11-24 19:19:27.096078275 +0000 UTC m=+160.885030636" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.183948 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.184691 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.189146 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.189207 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.192529 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.243603 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-catalog-content\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.243717 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twtxz\" (UniqueName: \"kubernetes.io/projected/12d754a7-703d-4e0a-a803-8a65a41786ed-kube-api-access-twtxz\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.244129 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-utilities\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.345685 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twtxz\" (UniqueName: \"kubernetes.io/projected/12d754a7-703d-4e0a-a803-8a65a41786ed-kube-api-access-twtxz\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.345770 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-utilities\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.345829 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245d8e2f-a3f1-4415-a431-595eccb8892a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.345928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-catalog-content\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.345980 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245d8e2f-a3f1-4415-a431-595eccb8892a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.346644 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-utilities\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.346684 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-catalog-content\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.371012 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7mpjm"] Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.372864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twtxz\" (UniqueName: \"kubernetes.io/projected/12d754a7-703d-4e0a-a803-8a65a41786ed-kube-api-access-twtxz\") pod \"redhat-marketplace-rzgzm\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.373182 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.383006 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mpjm"]
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.447221 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245d8e2f-a3f1-4415-a431-595eccb8892a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.447303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245d8e2f-a3f1-4415-a431-595eccb8892a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.447478 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245d8e2f-a3f1-4415-a431-595eccb8892a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.467101 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245d8e2f-a3f1-4415-a431-595eccb8892a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.505690 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.548695 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-catalog-content\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.548778 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-utilities\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.548823 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpc8w\" (UniqueName: \"kubernetes.io/projected/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-kube-api-access-jpc8w\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.649849 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-utilities\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.650238 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpc8w\" (UniqueName: \"kubernetes.io/projected/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-kube-api-access-jpc8w\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.650362 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-catalog-content\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.651239 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-catalog-content\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.651533 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-utilities\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.669682 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzgzm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.682743 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpc8w\" (UniqueName: \"kubernetes.io/projected/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-kube-api-access-jpc8w\") pod \"redhat-marketplace-7mpjm\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.710369 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mpjm"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.928347 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.936839 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 19:19:27 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld
Nov 24 19:19:27 crc kubenswrapper[4812]: [+]process-running ok
Nov 24 19:19:27 crc kubenswrapper[4812]: healthz check failed
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.936883 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.937663 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mpjm"]
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.943188 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" event={"ID":"74998661-1cd3-4ea2-ae49-b1e1a17da3e4","Type":"ContainerStarted","Data":"5ef9c0a0654c3b8b66f7544de0348e3426931d117121c7c27d213fc6a258200f"}
Nov 24 19:19:27 crc kubenswrapper[4812]: W1124 19:19:27.943278 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod245d8e2f_a3f1_4415_a431_595eccb8892a.slice/crio-300c4d18260f40ad944c12d0241c92d27dea35a4de398684542c4a91487d3483 WatchSource:0}: Error finding container 300c4d18260f40ad944c12d0241c92d27dea35a4de398684542c4a91487d3483: Status 404 returned error can't find the container with id 300c4d18260f40ad944c12d0241c92d27dea35a4de398684542c4a91487d3483
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.943314 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw"
Nov 24 19:19:27 crc kubenswrapper[4812]: W1124 19:19:27.953557 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c3c513_ab2f_4a3a_9528_bd74c2d724a5.slice/crio-6221694f7513ca4ec2537ede75656d6543ed1d30dfca03a7614fe566d281ae06 WatchSource:0}: Error finding container 6221694f7513ca4ec2537ede75656d6543ed1d30dfca03a7614fe566d281ae06: Status 404 returned error can't find the container with id 6221694f7513ca4ec2537ede75656d6543ed1d30dfca03a7614fe566d281ae06
Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.970575 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mtqhp"]
pods=["openshift-marketplace/redhat-operators-mtqhp"] Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.971615 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.972378 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" podStartSLOduration=136.972370257 podStartE2EDuration="2m16.972370257s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:27.964299262 +0000 UTC m=+161.753251653" watchObservedRunningTime="2025-11-24 19:19:27.972370257 +0000 UTC m=+161.761322628" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.983798 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 19:19:27 crc kubenswrapper[4812]: I1124 19:19:27.991223 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtqhp"] Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.089528 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzgzm"] Nov 24 19:19:28 crc kubenswrapper[4812]: W1124 19:19:28.112739 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12d754a7_703d_4e0a_a803_8a65a41786ed.slice/crio-4fe13b36d42ce55963524ac66446f11721c4b41d9e45994a60de86e2ff8f734c WatchSource:0}: Error finding container 4fe13b36d42ce55963524ac66446f11721c4b41d9e45994a60de86e2ff8f734c: Status 404 returned error can't find the container with id 4fe13b36d42ce55963524ac66446f11721c4b41d9e45994a60de86e2ff8f734c Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.157994 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-catalog-content\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.158056 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ct8\" (UniqueName: \"kubernetes.io/projected/9605af63-3073-4b4f-bd24-f85dab94bac3-kube-api-access-66ct8\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.158083 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-utilities\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.172775 4812 patch_prober.go:28] interesting pod/downloads-7954f5f757-4spkn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.172814 4812 patch_prober.go:28] 
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.172847 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4spkn" podUID="1b540550-77e7-4545-9e34-b972ab5ec677" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.172848 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4spkn" podUID="1b540550-77e7-4545-9e34-b972ab5ec677" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.260966 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-catalog-content\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.261043 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66ct8\" (UniqueName: \"kubernetes.io/projected/9605af63-3073-4b4f-bd24-f85dab94bac3-kube-api-access-66ct8\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.261080 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-utilities\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.261681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-utilities\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.261801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-catalog-content\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.276205 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.285682 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xtdvk"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.288007 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66ct8\" (UniqueName: \"kubernetes.io/projected/9605af63-3073-4b4f-bd24-f85dab94bac3-kube-api-access-66ct8\") pod \"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp"
\"redhat-operators-mtqhp\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:19:28 crc kubenswrapper[4812]: E1124 19:19:28.307855 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47c3c513_ab2f_4a3a_9528_bd74c2d724a5.slice/crio-89a1999513ac0762e87eabc488d024de2ebb6f20240f18256d564b12c08e2302.scope\": RecentStats: unable to find data in memory cache]" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.387358 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.401038 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7wg6"] Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.402154 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.408316 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7wg6"] Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.567157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-catalog-content\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.567546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6kzt\" (UniqueName: \"kubernetes.io/projected/559e80e9-11e4-45ca-9e7d-5071fe04faa3-kube-api-access-x6kzt\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.567854 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-utilities\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.669408 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-catalog-content\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.669466 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6kzt\" (UniqueName: \"kubernetes.io/projected/559e80e9-11e4-45ca-9e7d-5071fe04faa3-kube-api-access-x6kzt\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.669511 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-utilities\") pod 
\"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.669907 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-catalog-content\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.669954 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-utilities\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.679501 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mtqhp"] Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.692836 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6kzt\" (UniqueName: \"kubernetes.io/projected/559e80e9-11e4-45ca-9e7d-5071fe04faa3-kube-api-access-x6kzt\") pod \"redhat-operators-x7wg6\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") " pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.693187 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.693248 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.695173 4812 patch_prober.go:28] interesting pod/console-f9d7485db-jb4lb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.695229 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jb4lb" podUID="68125ac7-cf1b-4461-820a-b7318076e62d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.7:8443/health\": dial tcp 10.217.0.7:8443: connect: connection refused" Nov 24 19:19:28 crc kubenswrapper[4812]: W1124 19:19:28.706463 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9605af63_3073_4b4f_bd24_f85dab94bac3.slice/crio-2bee4f81d27829cd6c30959fe059872fcb499342e89d291710730784443af8a5 WatchSource:0}: Error finding container 2bee4f81d27829cd6c30959fe059872fcb499342e89d291710730784443af8a5: Status 404 returned error can't find the container with id 2bee4f81d27829cd6c30959fe059872fcb499342e89d291710730784443af8a5 Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.732152 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-krdpg" Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.762690 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.930768 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qrrxs"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.939244 4812 patch_prober.go:28] interesting pod/router-default-5444994796-qrrxs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 24 19:19:28 crc kubenswrapper[4812]: [-]has-synced failed: reason withheld
Nov 24 19:19:28 crc kubenswrapper[4812]: [+]process-running ok
Nov 24 19:19:28 crc kubenswrapper[4812]: healthz check failed
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.939295 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qrrxs" podUID="8d30a2dd-d2ba-463e-8d95-6f47271bac81" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 24 19:19:28 crc kubenswrapper[4812]: I1124 19:19:28.998609 4812 generic.go:334] "Generic (PLEG): container finished" podID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerID="89a1999513ac0762e87eabc488d024de2ebb6f20240f18256d564b12c08e2302" exitCode=0
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.036523 4812 generic.go:334] "Generic (PLEG): container finished" podID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerID="80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993" exitCode=0
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.038572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mpjm" event={"ID":"47c3c513-ab2f-4a3a-9528-bd74c2d724a5","Type":"ContainerDied","Data":"89a1999513ac0762e87eabc488d024de2ebb6f20240f18256d564b12c08e2302"}
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.038603 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mpjm" event={"ID":"47c3c513-ab2f-4a3a-9528-bd74c2d724a5","Type":"ContainerStarted","Data":"6221694f7513ca4ec2537ede75656d6543ed1d30dfca03a7614fe566d281ae06"}
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.038618 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzgzm" event={"ID":"12d754a7-703d-4e0a-a803-8a65a41786ed","Type":"ContainerDied","Data":"80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993"}
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.038628 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzgzm" event={"ID":"12d754a7-703d-4e0a-a803-8a65a41786ed","Type":"ContainerStarted","Data":"4fe13b36d42ce55963524ac66446f11721c4b41d9e45994a60de86e2ff8f734c"}
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.038638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtqhp" event={"ID":"9605af63-3073-4b4f-bd24-f85dab94bac3","Type":"ContainerStarted","Data":"2bee4f81d27829cd6c30959fe059872fcb499342e89d291710730784443af8a5"}
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.050982 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"245d8e2f-a3f1-4415-a431-595eccb8892a","Type":"ContainerStarted","Data":"f619cfdb2448357ebf79fbf7fe13ba8094d1f7e5356e494a1a36978f9e447077"}
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.052362 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"245d8e2f-a3f1-4415-a431-595eccb8892a","Type":"ContainerStarted","Data":"300c4d18260f40ad944c12d0241c92d27dea35a4de398684542c4a91487d3483"}
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.071097 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb"
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.071301 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.07128294 podStartE2EDuration="2.07128294s" podCreationTimestamp="2025-11-24 19:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:29.070850187 +0000 UTC m=+162.859802588" watchObservedRunningTime="2025-11-24 19:19:29.07128294 +0000 UTC m=+162.860235311"
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.367052 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7wg6"]
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.910482 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.911370 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.920761 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.922082 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.928740 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.942677 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qrrxs"
Nov 24 19:19:29 crc kubenswrapper[4812]: I1124 19:19:29.953701 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qrrxs"
Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.029438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.029739 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.085175 4812 generic.go:334] "Generic (PLEG): container finished" podID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerID="24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206" exitCode=0
containerID="24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206" exitCode=0 Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.085227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtqhp" event={"ID":"9605af63-3073-4b4f-bd24-f85dab94bac3","Type":"ContainerDied","Data":"24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206"} Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.090144 4812 generic.go:334] "Generic (PLEG): container finished" podID="245d8e2f-a3f1-4415-a431-595eccb8892a" containerID="f619cfdb2448357ebf79fbf7fe13ba8094d1f7e5356e494a1a36978f9e447077" exitCode=0 Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.090181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"245d8e2f-a3f1-4415-a431-595eccb8892a","Type":"ContainerDied","Data":"f619cfdb2448357ebf79fbf7fe13ba8094d1f7e5356e494a1a36978f9e447077"} Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.092915 4812 generic.go:334] "Generic (PLEG): container finished" podID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerID="e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918" exitCode=0 Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.093624 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wg6" event={"ID":"559e80e9-11e4-45ca-9e7d-5071fe04faa3","Type":"ContainerDied","Data":"e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918"} Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.093652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wg6" event={"ID":"559e80e9-11e4-45ca-9e7d-5071fe04faa3","Type":"ContainerStarted","Data":"f9b18bbaa7807accb91f9616185f4d726488d5dd54def12d4d75d1242fecc200"} Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.130895 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.130972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.131063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.222173 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.254499 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 19:19:30 crc kubenswrapper[4812]: I1124 19:19:30.509362 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.103966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d4bbe994-34dd-4bc3-877f-f0cb17f1640b","Type":"ContainerStarted","Data":"d848ddc5e810cedcfed61c8647269b2171c8bc747ce4a4b9dba05bb1e6715534"}
Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.543750 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.650250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245d8e2f-a3f1-4415-a431-595eccb8892a-kubelet-dir\") pod \"245d8e2f-a3f1-4415-a431-595eccb8892a\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") "
Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.650375 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245d8e2f-a3f1-4415-a431-595eccb8892a-kube-api-access\") pod \"245d8e2f-a3f1-4415-a431-595eccb8892a\" (UID: \"245d8e2f-a3f1-4415-a431-595eccb8892a\") "
Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.650429 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/245d8e2f-a3f1-4415-a431-595eccb8892a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "245d8e2f-a3f1-4415-a431-595eccb8892a" (UID: "245d8e2f-a3f1-4415-a431-595eccb8892a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.650717 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/245d8e2f-a3f1-4415-a431-595eccb8892a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.661538 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/245d8e2f-a3f1-4415-a431-595eccb8892a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "245d8e2f-a3f1-4415-a431-595eccb8892a" (UID: "245d8e2f-a3f1-4415-a431-595eccb8892a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:19:31 crc kubenswrapper[4812]: I1124 19:19:31.752098 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/245d8e2f-a3f1-4415-a431-595eccb8892a-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 19:19:32 crc kubenswrapper[4812]: I1124 19:19:32.149068 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d4bbe994-34dd-4bc3-877f-f0cb17f1640b","Type":"ContainerStarted","Data":"e75e16d7cc3d0efe2168b4c79444c5a80fd776d42670559d3e9897edcdf15fea"} Nov 24 19:19:32 crc kubenswrapper[4812]: I1124 19:19:32.153507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"245d8e2f-a3f1-4415-a431-595eccb8892a","Type":"ContainerDied","Data":"300c4d18260f40ad944c12d0241c92d27dea35a4de398684542c4a91487d3483"} Nov 24 19:19:32 crc kubenswrapper[4812]: I1124 19:19:32.153533 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="300c4d18260f40ad944c12d0241c92d27dea35a4de398684542c4a91487d3483" Nov 24 19:19:32 crc kubenswrapper[4812]: I1124 19:19:32.153580 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 19:19:32 crc kubenswrapper[4812]: I1124 19:19:32.170302 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.170276422 podStartE2EDuration="3.170276422s" podCreationTimestamp="2025-11-24 19:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:32.166851996 +0000 UTC m=+165.955804367" watchObservedRunningTime="2025-11-24 19:19:32.170276422 +0000 UTC m=+165.959228793" Nov 24 19:19:32 crc kubenswrapper[4812]: I1124 19:19:32.998271 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:19:32 crc kubenswrapper[4812]: I1124 19:19:32.998789 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:19:33 crc kubenswrapper[4812]: I1124 19:19:33.165523 4812 generic.go:334] "Generic (PLEG): container finished" podID="d4bbe994-34dd-4bc3-877f-f0cb17f1640b" containerID="e75e16d7cc3d0efe2168b4c79444c5a80fd776d42670559d3e9897edcdf15fea" exitCode=0 Nov 24 19:19:33 crc kubenswrapper[4812]: I1124 19:19:33.165584 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d4bbe994-34dd-4bc3-877f-f0cb17f1640b","Type":"ContainerDied","Data":"e75e16d7cc3d0efe2168b4c79444c5a80fd776d42670559d3e9897edcdf15fea"} Nov 24 19:19:33 crc kubenswrapper[4812]: I1124 19:19:33.928799 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-nzz22" Nov 24 19:19:34 crc kubenswrapper[4812]: I1124 19:19:34.500072 4812 
Nov 24 19:19:34 crc kubenswrapper[4812]: I1124 19:19:34.541141 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2d681a94-8d4a-45cc-8559-dfd15b6d0b1e-metrics-certs\") pod \"network-metrics-daemon-jxmnc\" (UID: \"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e\") " pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:19:34 crc kubenswrapper[4812]: I1124 19:19:34.817469 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jxmnc"
Nov 24 19:19:38 crc kubenswrapper[4812]: I1124 19:19:38.178316 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4spkn"
Nov 24 19:19:38 crc kubenswrapper[4812]: I1124 19:19:38.804924 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jb4lb"
Nov 24 19:19:38 crc kubenswrapper[4812]: I1124 19:19:38.808808 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jb4lb"
Nov 24 19:19:39 crc kubenswrapper[4812]: I1124 19:19:39.562458 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 24 19:19:39 crc kubenswrapper[4812]: I1124 19:19:39.684655 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kubelet-dir\") pod \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") "
Nov 24 19:19:39 crc kubenswrapper[4812]: I1124 19:19:39.684917 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d4bbe994-34dd-4bc3-877f-f0cb17f1640b" (UID: "d4bbe994-34dd-4bc3-877f-f0cb17f1640b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 19:19:39 crc kubenswrapper[4812]: I1124 19:19:39.685198 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kube-api-access\") pod \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\" (UID: \"d4bbe994-34dd-4bc3-877f-f0cb17f1640b\") "
Nov 24 19:19:39 crc kubenswrapper[4812]: I1124 19:19:39.685601 4812 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 24 19:19:39 crc kubenswrapper[4812]: I1124 19:19:39.690631 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d4bbe994-34dd-4bc3-877f-f0cb17f1640b" (UID: "d4bbe994-34dd-4bc3-877f-f0cb17f1640b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:19:39 crc kubenswrapper[4812]: I1124 19:19:39.796347 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d4bbe994-34dd-4bc3-877f-f0cb17f1640b-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 19:19:40 crc kubenswrapper[4812]: I1124 19:19:40.237533 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d4bbe994-34dd-4bc3-877f-f0cb17f1640b","Type":"ContainerDied","Data":"d848ddc5e810cedcfed61c8647269b2171c8bc747ce4a4b9dba05bb1e6715534"} Nov 24 19:19:40 crc kubenswrapper[4812]: I1124 19:19:40.237600 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 19:19:40 crc kubenswrapper[4812]: I1124 19:19:40.237608 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d848ddc5e810cedcfed61c8647269b2171c8bc747ce4a4b9dba05bb1e6715534" Nov 24 19:19:46 crc kubenswrapper[4812]: I1124 19:19:46.395527 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:19:52 crc kubenswrapper[4812]: E1124 19:19:52.654865 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 19:19:52 crc kubenswrapper[4812]: E1124 19:19:52.655651 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qf9p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zzpbr_openshift-marketplace(f0e8e18b-0be2-41c1-9520-61ed4b767fc2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 19:19:52 crc kubenswrapper[4812]: E1124 19:19:52.657499 4812 
Nov 24 19:19:54 crc kubenswrapper[4812]: I1124 19:19:54.994225 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 19:19:55 crc kubenswrapper[4812]: E1124 19:19:55.389590 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zzpbr" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2"
Nov 24 19:19:55 crc kubenswrapper[4812]: E1124 19:19:55.483661 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 24 19:19:55 crc kubenswrapper[4812]: E1124 19:19:55.483800 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66ct8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mtqhp_openshift-marketplace(9605af63-3073-4b4f-bd24-f85dab94bac3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 19:19:55 crc kubenswrapper[4812]: E1124 19:19:55.486533 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mtqhp" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3"
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.637547 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-mtqhp" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3"
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.773167 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.773655 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87wkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xhtx9_openshift-marketplace(d5b14a89-062b-410d-8d78-11239a856b78): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.775394 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xhtx9" podUID="d5b14a89-062b-410d-8d78-11239a856b78"
Nov 24 19:19:56 crc kubenswrapper[4812]: I1124 19:19:56.835983 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jxmnc"]
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.845381 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.845557 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mnx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-jjshd_openshift-marketplace(a8e7743d-c680-4492-a4fe-87196f7fd893): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.846816 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-jjshd" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893"
Nov 24 19:19:56 crc kubenswrapper[4812]: W1124 19:19:56.850018 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d681a94_8d4a_45cc_8559_dfd15b6d0b1e.slice/crio-6e048a5c7f444b5b5865e92c4dcc7e8f91af3bba6c4ea072354e737bfe9887d6 WatchSource:0}: Error finding container 6e048a5c7f444b5b5865e92c4dcc7e8f91af3bba6c4ea072354e737bfe9887d6: Status 404 returned error can't find the container with id 6e048a5c7f444b5b5865e92c4dcc7e8f91af3bba6c4ea072354e737bfe9887d6
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.868189 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.868638 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6kzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-x7wg6_openshift-marketplace(559e80e9-11e4-45ca-9e7d-5071fe04faa3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6kzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-x7wg6_openshift-marketplace(559e80e9-11e4-45ca-9e7d-5071fe04faa3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 19:19:56 crc kubenswrapper[4812]: E1124 19:19:56.869847 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-x7wg6" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.366720 4812 generic.go:334] "Generic (PLEG): container finished" podID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerID="cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd" exitCode=0 Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.366811 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-466wh" event={"ID":"da6d85be-188f-4631-af4a-49aa40bb0d3e","Type":"ContainerDied","Data":"cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd"} Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.367682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" event={"ID":"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e","Type":"ContainerStarted","Data":"9679fda25288348a544c6efea4aa7afe8abb33552b4ab902244bf87388f24546"} Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.367725 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" event={"ID":"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e","Type":"ContainerStarted","Data":"6e048a5c7f444b5b5865e92c4dcc7e8f91af3bba6c4ea072354e737bfe9887d6"} Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.372093 4812 generic.go:334] "Generic (PLEG): container finished" podID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerID="6d61146f7690eb6a13a2cd9a6138e599abbf906250cd5a4260ba0adff25a729c" exitCode=0 Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.372146 4812 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-7mpjm" event={"ID":"47c3c513-ab2f-4a3a-9528-bd74c2d724a5","Type":"ContainerDied","Data":"6d61146f7690eb6a13a2cd9a6138e599abbf906250cd5a4260ba0adff25a729c"} Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.376105 4812 generic.go:334] "Generic (PLEG): container finished" podID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerID="65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb" exitCode=0 Nov 24 19:19:57 crc kubenswrapper[4812]: I1124 19:19:57.376198 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzgzm" event={"ID":"12d754a7-703d-4e0a-a803-8a65a41786ed","Type":"ContainerDied","Data":"65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb"} Nov 24 19:19:57 crc kubenswrapper[4812]: E1124 19:19:57.400141 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-jjshd" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" Nov 24 19:19:57 crc kubenswrapper[4812]: E1124 19:19:57.400242 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xhtx9" podUID="d5b14a89-062b-410d-8d78-11239a856b78" Nov 24 19:19:57 crc kubenswrapper[4812]: E1124 19:19:57.400322 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-x7wg6" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.408049 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzgzm" event={"ID":"12d754a7-703d-4e0a-a803-8a65a41786ed","Type":"ContainerStarted","Data":"a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e"} Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.412725 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-466wh" event={"ID":"da6d85be-188f-4631-af4a-49aa40bb0d3e","Type":"ContainerStarted","Data":"cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2"} Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.414516 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jxmnc" event={"ID":"2d681a94-8d4a-45cc-8559-dfd15b6d0b1e","Type":"ContainerStarted","Data":"ca4bf085173fc0c550727e2d6d06acfbf79f071dd86a530fd349206e206bb419"} Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.416672 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mpjm" event={"ID":"47c3c513-ab2f-4a3a-9528-bd74c2d724a5","Type":"ContainerStarted","Data":"fb70e657763e3cc920259c09c7e6600f058f315fcaff4d0087713aed4f18b88a"} Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.446528 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rzgzm" podStartSLOduration=3.6200021959999997 podStartE2EDuration="32.446497048s" podCreationTimestamp="2025-11-24 
19:19:26 +0000 UTC" firstStartedPulling="2025-11-24 19:19:29.03790062 +0000 UTC m=+162.826852991" lastFinishedPulling="2025-11-24 19:19:57.864395472 +0000 UTC m=+191.653347843" observedRunningTime="2025-11-24 19:19:58.425811922 +0000 UTC m=+192.214764313" watchObservedRunningTime="2025-11-24 19:19:58.446497048 +0000 UTC m=+192.235449459" Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.448806 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-466wh" podStartSLOduration=3.608463149 podStartE2EDuration="34.448793012s" podCreationTimestamp="2025-11-24 19:19:24 +0000 UTC" firstStartedPulling="2025-11-24 19:19:26.921564333 +0000 UTC m=+160.710516704" lastFinishedPulling="2025-11-24 19:19:57.761894186 +0000 UTC m=+191.550846567" observedRunningTime="2025-11-24 19:19:58.440835481 +0000 UTC m=+192.229787852" watchObservedRunningTime="2025-11-24 19:19:58.448793012 +0000 UTC m=+192.237745433" Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.468713 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jxmnc" podStartSLOduration=167.468691267 podStartE2EDuration="2m47.468691267s" podCreationTimestamp="2025-11-24 19:17:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:19:58.465834767 +0000 UTC m=+192.254787178" watchObservedRunningTime="2025-11-24 19:19:58.468691267 +0000 UTC m=+192.257643668" Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.489817 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7mpjm" podStartSLOduration=2.6004076400000002 podStartE2EDuration="31.489786104s" podCreationTimestamp="2025-11-24 19:19:27 +0000 UTC" firstStartedPulling="2025-11-24 19:19:29.034762952 +0000 UTC m=+162.823715333" lastFinishedPulling="2025-11-24 19:19:57.924141426 +0000 UTC m=+191.713093797" observedRunningTime="2025-11-24 19:19:58.489247759 +0000 UTC m=+192.278200130" watchObservedRunningTime="2025-11-24 19:19:58.489786104 +0000 UTC m=+192.278738485" Nov 24 19:19:58 crc kubenswrapper[4812]: I1124 19:19:58.863990 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9hgkr" Nov 24 19:20:02 crc kubenswrapper[4812]: I1124 19:20:02.998751 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:20:03 crc kubenswrapper[4812]: I1124 19:20:02.999385 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:20:05 crc kubenswrapper[4812]: I1124 19:20:05.164382 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-466wh" Nov 24 19:20:05 crc kubenswrapper[4812]: I1124 19:20:05.164994 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-466wh" Nov 24 19:20:05 
crc kubenswrapper[4812]: I1124 19:20:05.334026 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-466wh" Nov 24 19:20:05 crc kubenswrapper[4812]: I1124 19:20:05.494286 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-466wh" Nov 24 19:20:07 crc kubenswrapper[4812]: I1124 19:20:07.670800 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:20:07 crc kubenswrapper[4812]: I1124 19:20:07.670879 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:20:07 crc kubenswrapper[4812]: I1124 19:20:07.713786 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7mpjm" Nov 24 19:20:07 crc kubenswrapper[4812]: I1124 19:20:07.714187 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7mpjm" Nov 24 19:20:07 crc kubenswrapper[4812]: I1124 19:20:07.736622 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:20:07 crc kubenswrapper[4812]: I1124 19:20:07.764949 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7mpjm" Nov 24 19:20:08 crc kubenswrapper[4812]: I1124 19:20:08.523222 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7mpjm" Nov 24 19:20:08 crc kubenswrapper[4812]: I1124 19:20:08.526536 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:20:08 crc kubenswrapper[4812]: I1124 19:20:08.972448 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mpjm"] Nov 24 19:20:10 crc kubenswrapper[4812]: I1124 19:20:10.476315 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7mpjm" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="registry-server" containerID="cri-o://fb70e657763e3cc920259c09c7e6600f058f315fcaff4d0087713aed4f18b88a" gracePeriod=2 Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.484540 4812 generic.go:334] "Generic (PLEG): container finished" podID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerID="fb70e657763e3cc920259c09c7e6600f058f315fcaff4d0087713aed4f18b88a" exitCode=0 Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.484585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mpjm" event={"ID":"47c3c513-ab2f-4a3a-9528-bd74c2d724a5","Type":"ContainerDied","Data":"fb70e657763e3cc920259c09c7e6600f058f315fcaff4d0087713aed4f18b88a"} Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.575215 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mpjm" Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.765274 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-catalog-content\") pod \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.765353 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-utilities\") pod \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.765423 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpc8w\" (UniqueName: \"kubernetes.io/projected/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-kube-api-access-jpc8w\") pod \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\" (UID: \"47c3c513-ab2f-4a3a-9528-bd74c2d724a5\") " Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.766380 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-utilities" (OuterVolumeSpecName: "utilities") pod "47c3c513-ab2f-4a3a-9528-bd74c2d724a5" (UID: "47c3c513-ab2f-4a3a-9528-bd74c2d724a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.771521 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-kube-api-access-jpc8w" (OuterVolumeSpecName: "kube-api-access-jpc8w") pod "47c3c513-ab2f-4a3a-9528-bd74c2d724a5" (UID: "47c3c513-ab2f-4a3a-9528-bd74c2d724a5"). InnerVolumeSpecName "kube-api-access-jpc8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.782917 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47c3c513-ab2f-4a3a-9528-bd74c2d724a5" (UID: "47c3c513-ab2f-4a3a-9528-bd74c2d724a5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.867298 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.867372 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:11 crc kubenswrapper[4812]: I1124 19:20:11.867391 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpc8w\" (UniqueName: \"kubernetes.io/projected/47c3c513-ab2f-4a3a-9528-bd74c2d724a5-kube-api-access-jpc8w\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.495732 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mpjm" event={"ID":"47c3c513-ab2f-4a3a-9528-bd74c2d724a5","Type":"ContainerDied","Data":"6221694f7513ca4ec2537ede75656d6543ed1d30dfca03a7614fe566d281ae06"} Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.499020 4812 scope.go:117] "RemoveContainer" containerID="fb70e657763e3cc920259c09c7e6600f058f315fcaff4d0087713aed4f18b88a" Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.498933 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mpjm" Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.523971 4812 scope.go:117] "RemoveContainer" containerID="6d61146f7690eb6a13a2cd9a6138e599abbf906250cd5a4260ba0adff25a729c" Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.534573 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mpjm"] Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.537122 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mpjm"] Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.552898 4812 scope.go:117] "RemoveContainer" containerID="89a1999513ac0762e87eabc488d024de2ebb6f20240f18256d564b12c08e2302" Nov 24 19:20:12 crc kubenswrapper[4812]: I1124 19:20:12.975652 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" path="/var/lib/kubelet/pods/47c3c513-ab2f-4a3a-9528-bd74c2d724a5/volumes" Nov 24 19:20:13 crc kubenswrapper[4812]: I1124 19:20:13.503284 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtqhp" event={"ID":"9605af63-3073-4b4f-bd24-f85dab94bac3","Type":"ContainerStarted","Data":"3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a"} Nov 24 19:20:13 crc kubenswrapper[4812]: I1124 19:20:13.505545 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjshd" event={"ID":"a8e7743d-c680-4492-a4fe-87196f7fd893","Type":"ContainerStarted","Data":"34fabadec49516f9da08eba722cc099062869228b0e21ac9aa2fd22c5646c10e"} Nov 24 19:20:13 crc kubenswrapper[4812]: I1124 19:20:13.508307 4812 generic.go:334] "Generic (PLEG): container finished" podID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerID="8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241" exitCode=0 Nov 24 19:20:13 crc kubenswrapper[4812]: I1124 19:20:13.508382 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-zzpbr" event={"ID":"f0e8e18b-0be2-41c1-9520-61ed4b767fc2","Type":"ContainerDied","Data":"8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241"} Nov 24 19:20:13 crc kubenswrapper[4812]: I1124 19:20:13.516180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wg6" event={"ID":"559e80e9-11e4-45ca-9e7d-5071fe04faa3","Type":"ContainerStarted","Data":"39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504"} Nov 24 19:20:13 crc kubenswrapper[4812]: I1124 19:20:13.517835 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhtx9" event={"ID":"d5b14a89-062b-410d-8d78-11239a856b78","Type":"ContainerStarted","Data":"5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e"} Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.526119 4812 generic.go:334] "Generic (PLEG): container finished" podID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerID="39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504" exitCode=0 Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.526253 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wg6" event={"ID":"559e80e9-11e4-45ca-9e7d-5071fe04faa3","Type":"ContainerDied","Data":"39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504"} Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.532286 4812 generic.go:334] "Generic (PLEG): container finished" podID="d5b14a89-062b-410d-8d78-11239a856b78" containerID="5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e" exitCode=0 Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.532414 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhtx9" event={"ID":"d5b14a89-062b-410d-8d78-11239a856b78","Type":"ContainerDied","Data":"5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e"} Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.536765 4812 generic.go:334] "Generic (PLEG): container finished" podID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerID="3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a" exitCode=0 Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.536824 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtqhp" event={"ID":"9605af63-3073-4b4f-bd24-f85dab94bac3","Type":"ContainerDied","Data":"3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a"} Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.540457 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerID="34fabadec49516f9da08eba722cc099062869228b0e21ac9aa2fd22c5646c10e" exitCode=0 Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.540531 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjshd" event={"ID":"a8e7743d-c680-4492-a4fe-87196f7fd893","Type":"ContainerDied","Data":"34fabadec49516f9da08eba722cc099062869228b0e21ac9aa2fd22c5646c10e"} Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.542267 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzpbr" event={"ID":"f0e8e18b-0be2-41c1-9520-61ed4b767fc2","Type":"ContainerStarted","Data":"216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce"} Nov 24 19:20:14 crc kubenswrapper[4812]: I1124 19:20:14.577104 4812 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zzpbr" podStartSLOduration=2.5783114879999998 podStartE2EDuration="49.577086908s" podCreationTimestamp="2025-11-24 19:19:25 +0000 UTC" firstStartedPulling="2025-11-24 19:19:26.90532744 +0000 UTC m=+160.694279831" lastFinishedPulling="2025-11-24 19:20:13.90410288 +0000 UTC m=+207.693055251" observedRunningTime="2025-11-24 19:20:14.573167779 +0000 UTC m=+208.362120150" watchObservedRunningTime="2025-11-24 19:20:14.577086908 +0000 UTC m=+208.366039279" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.526826 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.527123 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.550861 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhtx9" event={"ID":"d5b14a89-062b-410d-8d78-11239a856b78","Type":"ContainerStarted","Data":"71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72"} Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.554656 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtqhp" event={"ID":"9605af63-3073-4b4f-bd24-f85dab94bac3","Type":"ContainerStarted","Data":"0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433"} Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.556894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjshd" event={"ID":"a8e7743d-c680-4492-a4fe-87196f7fd893","Type":"ContainerStarted","Data":"9aa9270737adb4e2bfe3b0fa08b71071e060bc56b70409be0493bcb5aab78002"} Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.559158 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wg6" event={"ID":"559e80e9-11e4-45ca-9e7d-5071fe04faa3","Type":"ContainerStarted","Data":"2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f"} Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.572042 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xhtx9" podStartSLOduration=3.56292847 podStartE2EDuration="51.572022704s" podCreationTimestamp="2025-11-24 19:19:24 +0000 UTC" firstStartedPulling="2025-11-24 19:19:26.931001006 +0000 UTC m=+160.719953417" lastFinishedPulling="2025-11-24 19:20:14.94009527 +0000 UTC m=+208.729047651" observedRunningTime="2025-11-24 19:20:15.569535925 +0000 UTC m=+209.358488296" watchObservedRunningTime="2025-11-24 19:20:15.572022704 +0000 UTC m=+209.360975085" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.588515 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mtqhp" podStartSLOduration=3.7227589869999997 podStartE2EDuration="48.588495363s" podCreationTimestamp="2025-11-24 19:19:27 +0000 UTC" firstStartedPulling="2025-11-24 19:19:30.107528318 +0000 UTC m=+163.896480689" lastFinishedPulling="2025-11-24 19:20:14.973264694 +0000 UTC m=+208.762217065" observedRunningTime="2025-11-24 19:20:15.587865866 +0000 UTC m=+209.376818247" watchObservedRunningTime="2025-11-24 19:20:15.588495363 +0000 UTC m=+209.377447734" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 
19:20:15.608051 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jjshd" podStartSLOduration=2.509944515 podStartE2EDuration="50.608034478s" podCreationTimestamp="2025-11-24 19:19:25 +0000 UTC" firstStartedPulling="2025-11-24 19:19:26.902896553 +0000 UTC m=+160.691848954" lastFinishedPulling="2025-11-24 19:20:15.000986476 +0000 UTC m=+208.789938917" observedRunningTime="2025-11-24 19:20:15.607545944 +0000 UTC m=+209.396498325" watchObservedRunningTime="2025-11-24 19:20:15.608034478 +0000 UTC m=+209.396986849" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.630818 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7wg6" podStartSLOduration=2.6045531840000002 podStartE2EDuration="47.630795342s" podCreationTimestamp="2025-11-24 19:19:28 +0000 UTC" firstStartedPulling="2025-11-24 19:19:30.108796503 +0000 UTC m=+163.897748874" lastFinishedPulling="2025-11-24 19:20:15.135038661 +0000 UTC m=+208.923991032" observedRunningTime="2025-11-24 19:20:15.630231286 +0000 UTC m=+209.419183677" watchObservedRunningTime="2025-11-24 19:20:15.630795342 +0000 UTC m=+209.419747713" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.738252 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:20:15 crc kubenswrapper[4812]: I1124 19:20:15.738369 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:20:16 crc kubenswrapper[4812]: I1124 19:20:16.567503 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zzpbr" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="registry-server" probeResult="failure" output=< Nov 24 19:20:16 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 19:20:16 crc kubenswrapper[4812]: > Nov 24 19:20:16 crc kubenswrapper[4812]: I1124 19:20:16.783283 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jjshd" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="registry-server" probeResult="failure" output=< Nov 24 19:20:16 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 19:20:16 crc kubenswrapper[4812]: > Nov 24 19:20:17 crc kubenswrapper[4812]: I1124 19:20:17.561678 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqzwn"] Nov 24 19:20:18 crc kubenswrapper[4812]: I1124 19:20:18.388302 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:20:18 crc kubenswrapper[4812]: I1124 19:20:18.388655 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:20:18 crc kubenswrapper[4812]: I1124 19:20:18.764240 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:20:18 crc kubenswrapper[4812]: I1124 19:20:18.764295 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:20:19 crc kubenswrapper[4812]: I1124 19:20:19.439663 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mtqhp" 
podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="registry-server" probeResult="failure" output=< Nov 24 19:20:19 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 19:20:19 crc kubenswrapper[4812]: > Nov 24 19:20:19 crc kubenswrapper[4812]: I1124 19:20:19.815859 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7wg6" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="registry-server" probeResult="failure" output=< Nov 24 19:20:19 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 19:20:19 crc kubenswrapper[4812]: > Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.335741 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.336530 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.404630 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.580322 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.677501 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.683891 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.802391 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:20:25 crc kubenswrapper[4812]: I1124 19:20:25.852616 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:20:27 crc kubenswrapper[4812]: I1124 19:20:27.003837 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjshd"] Nov 24 19:20:28 crc kubenswrapper[4812]: I1124 19:20:28.116660 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jjshd" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="registry-server" containerID="cri-o://9aa9270737adb4e2bfe3b0fa08b71071e060bc56b70409be0493bcb5aab78002" gracePeriod=2 Nov 24 19:20:28 crc kubenswrapper[4812]: I1124 19:20:28.449144 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:20:28 crc kubenswrapper[4812]: I1124 19:20:28.512544 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:20:28 crc kubenswrapper[4812]: I1124 19:20:28.802224 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzpbr"] Nov 24 19:20:28 crc kubenswrapper[4812]: I1124 19:20:28.802482 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zzpbr" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="registry-server" 
containerID="cri-o://216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce" gracePeriod=2 Nov 24 19:20:28 crc kubenswrapper[4812]: I1124 19:20:28.841928 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:20:28 crc kubenswrapper[4812]: I1124 19:20:28.885258 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7wg6" Nov 24 19:20:29 crc kubenswrapper[4812]: E1124 19:20:29.054138 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e8e18b_0be2_41c1_9520_61ed4b767fc2.slice/crio-216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce.scope\": RecentStats: unable to find data in memory cache]" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.125325 4812 generic.go:334] "Generic (PLEG): container finished" podID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerID="9aa9270737adb4e2bfe3b0fa08b71071e060bc56b70409be0493bcb5aab78002" exitCode=0 Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.125513 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjshd" event={"ID":"a8e7743d-c680-4492-a4fe-87196f7fd893","Type":"ContainerDied","Data":"9aa9270737adb4e2bfe3b0fa08b71071e060bc56b70409be0493bcb5aab78002"} Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.272072 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.446938 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mnx6\" (UniqueName: \"kubernetes.io/projected/a8e7743d-c680-4492-a4fe-87196f7fd893-kube-api-access-5mnx6\") pod \"a8e7743d-c680-4492-a4fe-87196f7fd893\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.447059 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-utilities\") pod \"a8e7743d-c680-4492-a4fe-87196f7fd893\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.447098 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-catalog-content\") pod \"a8e7743d-c680-4492-a4fe-87196f7fd893\" (UID: \"a8e7743d-c680-4492-a4fe-87196f7fd893\") " Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.453676 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-utilities" (OuterVolumeSpecName: "utilities") pod "a8e7743d-c680-4492-a4fe-87196f7fd893" (UID: "a8e7743d-c680-4492-a4fe-87196f7fd893"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.457445 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e7743d-c680-4492-a4fe-87196f7fd893-kube-api-access-5mnx6" (OuterVolumeSpecName: "kube-api-access-5mnx6") pod "a8e7743d-c680-4492-a4fe-87196f7fd893" (UID: "a8e7743d-c680-4492-a4fe-87196f7fd893"). 
InnerVolumeSpecName "kube-api-access-5mnx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.539326 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8e7743d-c680-4492-a4fe-87196f7fd893" (UID: "a8e7743d-c680-4492-a4fe-87196f7fd893"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.550055 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mnx6\" (UniqueName: \"kubernetes.io/projected/a8e7743d-c680-4492-a4fe-87196f7fd893-kube-api-access-5mnx6\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.550145 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.550162 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8e7743d-c680-4492-a4fe-87196f7fd893-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.781308 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.954939 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-utilities\") pod \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.955029 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf9p2\" (UniqueName: \"kubernetes.io/projected/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-kube-api-access-qf9p2\") pod \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.955111 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-catalog-content\") pod \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\" (UID: \"f0e8e18b-0be2-41c1-9520-61ed4b767fc2\") " Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.957216 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-utilities" (OuterVolumeSpecName: "utilities") pod "f0e8e18b-0be2-41c1-9520-61ed4b767fc2" (UID: "f0e8e18b-0be2-41c1-9520-61ed4b767fc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:20:29 crc kubenswrapper[4812]: I1124 19:20:29.958711 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-kube-api-access-qf9p2" (OuterVolumeSpecName: "kube-api-access-qf9p2") pod "f0e8e18b-0be2-41c1-9520-61ed4b767fc2" (UID: "f0e8e18b-0be2-41c1-9520-61ed4b767fc2"). InnerVolumeSpecName "kube-api-access-qf9p2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.021393 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0e8e18b-0be2-41c1-9520-61ed4b767fc2" (UID: "f0e8e18b-0be2-41c1-9520-61ed4b767fc2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.057002 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.057049 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf9p2\" (UniqueName: \"kubernetes.io/projected/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-kube-api-access-qf9p2\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.057060 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0e8e18b-0be2-41c1-9520-61ed4b767fc2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.134426 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jjshd" event={"ID":"a8e7743d-c680-4492-a4fe-87196f7fd893","Type":"ContainerDied","Data":"83b27929aaceb32a92d114ac69952564c3945e37aeeacab89299bed056d4ad17"} Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.134805 4812 scope.go:117] "RemoveContainer" containerID="9aa9270737adb4e2bfe3b0fa08b71071e060bc56b70409be0493bcb5aab78002" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.134520 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jjshd" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.138479 4812 generic.go:334] "Generic (PLEG): container finished" podID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerID="216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce" exitCode=0 Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.138542 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzpbr" event={"ID":"f0e8e18b-0be2-41c1-9520-61ed4b767fc2","Type":"ContainerDied","Data":"216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce"} Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.138575 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zzpbr" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.138597 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zzpbr" event={"ID":"f0e8e18b-0be2-41c1-9520-61ed4b767fc2","Type":"ContainerDied","Data":"9f5816b905ac1ff9bb8ae0eab84e36618b4833ed1299e52fc74b70062f92ea7e"} Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.155764 4812 scope.go:117] "RemoveContainer" containerID="34fabadec49516f9da08eba722cc099062869228b0e21ac9aa2fd22c5646c10e" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.172647 4812 scope.go:117] "RemoveContainer" containerID="965a085c56e5b4a38ca87d1b74c8788094549e739a1544d6c7d4c0cfc08583cf" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.199401 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jjshd"] Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.203496 4812 scope.go:117] "RemoveContainer" containerID="216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.203631 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jjshd"] Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.213923 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zzpbr"] Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.216961 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zzpbr"] Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.218394 4812 scope.go:117] "RemoveContainer" containerID="8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.232118 4812 scope.go:117] "RemoveContainer" containerID="47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.245068 4812 scope.go:117] "RemoveContainer" containerID="216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce" Nov 24 19:20:30 crc kubenswrapper[4812]: E1124 19:20:30.245484 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce\": container with ID starting with 216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce not found: ID does not exist" containerID="216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.245516 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce"} err="failed to get container status \"216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce\": rpc error: code = NotFound desc = could not find container \"216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce\": container with ID starting with 216720d04eed70b2c3e42484dfcfa90f7f3519a8b08b9adcdeef930c48539bce not found: ID does not exist" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.245560 4812 scope.go:117] "RemoveContainer" containerID="8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241" Nov 24 19:20:30 crc kubenswrapper[4812]: E1124 19:20:30.245856 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241\": container with ID starting with 8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241 not found: ID does not exist" containerID="8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.245878 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241"} err="failed to get container status \"8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241\": rpc error: code = NotFound desc = could not find container \"8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241\": container with ID starting with 8a61e2bc88ed3d5f06054290eecf8b54b22aec5059baf636230cef2757e3c241 not found: ID does not exist" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.245898 4812 scope.go:117] "RemoveContainer" containerID="47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685" Nov 24 19:20:30 crc kubenswrapper[4812]: E1124 19:20:30.246171 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685\": container with ID starting with 47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685 not found: ID does not exist" containerID="47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.246191 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685"} err="failed to get container status \"47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685\": rpc error: code = NotFound desc = could not find container \"47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685\": container with ID starting with 47bf703006b9b6f12319b7a900409da75b7a9c960c73a3ed44bd3e0e5df5f685 not found: ID does not exist" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.978040 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" path="/var/lib/kubelet/pods/a8e7743d-c680-4492-a4fe-87196f7fd893/volumes" Nov 24 19:20:30 crc kubenswrapper[4812]: I1124 19:20:30.979608 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" path="/var/lib/kubelet/pods/f0e8e18b-0be2-41c1-9520-61ed4b767fc2/volumes" Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.204457 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7wg6"] Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.204827 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7wg6" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="registry-server" containerID="cri-o://2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f" gracePeriod=2 Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.626559 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7wg6"
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.781250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6kzt\" (UniqueName: \"kubernetes.io/projected/559e80e9-11e4-45ca-9e7d-5071fe04faa3-kube-api-access-x6kzt\") pod \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") "
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.781413 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-catalog-content\") pod \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") "
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.781457 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-utilities\") pod \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\" (UID: \"559e80e9-11e4-45ca-9e7d-5071fe04faa3\") "
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.782805 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-utilities" (OuterVolumeSpecName: "utilities") pod "559e80e9-11e4-45ca-9e7d-5071fe04faa3" (UID: "559e80e9-11e4-45ca-9e7d-5071fe04faa3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.787811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559e80e9-11e4-45ca-9e7d-5071fe04faa3-kube-api-access-x6kzt" (OuterVolumeSpecName: "kube-api-access-x6kzt") pod "559e80e9-11e4-45ca-9e7d-5071fe04faa3" (UID: "559e80e9-11e4-45ca-9e7d-5071fe04faa3"). InnerVolumeSpecName "kube-api-access-x6kzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.883811 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6kzt\" (UniqueName: \"kubernetes.io/projected/559e80e9-11e4-45ca-9e7d-5071fe04faa3-kube-api-access-x6kzt\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.883861 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.896488 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "559e80e9-11e4-45ca-9e7d-5071fe04faa3" (UID: "559e80e9-11e4-45ca-9e7d-5071fe04faa3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:20:31 crc kubenswrapper[4812]: I1124 19:20:31.985322 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/559e80e9-11e4-45ca-9e7d-5071fe04faa3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.162929 4812 generic.go:334] "Generic (PLEG): container finished" podID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerID="2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f" exitCode=0
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.163044 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7wg6"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.163056 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wg6" event={"ID":"559e80e9-11e4-45ca-9e7d-5071fe04faa3","Type":"ContainerDied","Data":"2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f"}
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.163113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7wg6" event={"ID":"559e80e9-11e4-45ca-9e7d-5071fe04faa3","Type":"ContainerDied","Data":"f9b18bbaa7807accb91f9616185f4d726488d5dd54def12d4d75d1242fecc200"}
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.163161 4812 scope.go:117] "RemoveContainer" containerID="2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.189198 4812 scope.go:117] "RemoveContainer" containerID="39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.218562 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7wg6"]
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.223260 4812 scope.go:117] "RemoveContainer" containerID="e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.223773 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7wg6"]
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.242185 4812 scope.go:117] "RemoveContainer" containerID="2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f"
Nov 24 19:20:32 crc kubenswrapper[4812]: E1124 19:20:32.242790 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f\": container with ID starting with 2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f not found: ID does not exist" containerID="2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.242869 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f"} err="failed to get container status \"2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f\": rpc error: code = NotFound desc = could not find container \"2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f\": container with ID starting with 2d9a6a6044291dd45851600e603a1b24bb47afcbb9876122c8766b4641f7af5f not found: ID does not exist"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.242919 4812 scope.go:117] "RemoveContainer" containerID="39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504"
Nov 24 19:20:32 crc kubenswrapper[4812]: E1124 19:20:32.244579 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504\": container with ID starting with 39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504 not found: ID does not exist" containerID="39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.244628 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504"} err="failed to get container status \"39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504\": rpc error: code = NotFound desc = could not find container \"39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504\": container with ID starting with 39bf320556a67475ffb4335b8e422c959f8189253e9852254d7f27e0e032b504 not found: ID does not exist"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.244657 4812 scope.go:117] "RemoveContainer" containerID="e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918"
Nov 24 19:20:32 crc kubenswrapper[4812]: E1124 19:20:32.245102 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918\": container with ID starting with e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918 not found: ID does not exist" containerID="e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.245153 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918"} err="failed to get container status \"e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918\": rpc error: code = NotFound desc = could not find container \"e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918\": container with ID starting with e0fec7e7067522b34bcd4686950c02f06b0b4279d16da552644e6210d7cd4918 not found: ID does not exist"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.976670 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" path="/var/lib/kubelet/pods/559e80e9-11e4-45ca-9e7d-5071fe04faa3/volumes"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.998173 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.998259 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.998322 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.999005 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 19:20:32 crc kubenswrapper[4812]: I1124 19:20:32.999074 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b" gracePeriod=600
Nov 24 19:20:33 crc kubenswrapper[4812]: I1124 19:20:33.174091 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b" exitCode=0
Nov 24 19:20:33 crc kubenswrapper[4812]: I1124 19:20:33.174280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b"}
Nov 24 19:20:34 crc kubenswrapper[4812]: I1124 19:20:34.184503 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"9eaff0558087aaedaaf1b5c0557fb9301e3374c23c9d5289ea7c142bf667389a"}
Nov 24 19:20:42 crc kubenswrapper[4812]: I1124 19:20:42.589789 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" podUID="03ad7961-295b-4e12-82b0-b75f196049b0" containerName="oauth-openshift" containerID="cri-o://edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee" gracePeriod=15
Nov 24 19:20:42 crc kubenswrapper[4812]: I1124 19:20:42.972430 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.019965 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"]
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.020800 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03ad7961-295b-4e12-82b0-b75f196049b0" containerName="oauth-openshift"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.020835 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="03ad7961-295b-4e12-82b0-b75f196049b0" containerName="oauth-openshift"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.020866 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.020881 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.020904 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.020920 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.020938 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.020951 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.020970 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.020983 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021010 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="245d8e2f-a3f1-4415-a431-595eccb8892a" containerName="pruner"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021022 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="245d8e2f-a3f1-4415-a431-595eccb8892a" containerName="pruner"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021039 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bbe994-34dd-4bc3-877f-f0cb17f1640b" containerName="pruner"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021051 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bbe994-34dd-4bc3-877f-f0cb17f1640b" containerName="pruner"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021065 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021078 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021098 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021111 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021128 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021141 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021159 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021171 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021192 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021205 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="extract-content"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021226 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021240 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021258 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021270 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.021287 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021300 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="extract-utilities"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021508 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e8e18b-0be2-41c1-9520-61ed4b767fc2" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021529 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="559e80e9-11e4-45ca-9e7d-5071fe04faa3" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021553 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e7743d-c680-4492-a4fe-87196f7fd893" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021574 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c3c513-ab2f-4a3a-9528-bd74c2d724a5" containerName="registry-server"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021594 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bbe994-34dd-4bc3-877f-f0cb17f1640b" containerName="pruner"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021611 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="03ad7961-295b-4e12-82b0-b75f196049b0" containerName="oauth-openshift"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.021755 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="245d8e2f-a3f1-4415-a431-595eccb8892a" containerName="pruner"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.022655 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.024175 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"]
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-login\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068450 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068477 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-router-certs\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068579 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068715 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068745 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-session\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068775 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068800 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js5qh\" (UniqueName: \"kubernetes.io/projected/cbf45aa6-1325-4885-ac0a-024cf8a778d2-kube-api-access-js5qh\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068830 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf45aa6-1325-4885-ac0a-024cf8a778d2-audit-dir\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068858 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-error\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068878 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068901 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-service-ca\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068927 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.068947 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-audit-policies\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169696 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-ocp-branding-template\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169757 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-audit-policies\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169788 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-trusted-ca-bundle\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169812 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-provider-selection\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169837 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-router-certs\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169879 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-cliconfig\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169917 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-idp-0-file-data\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.169969 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03ad7961-295b-4e12-82b0-b75f196049b0-audit-dir\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170003 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfbhn\" (UniqueName: \"kubernetes.io/projected/03ad7961-295b-4e12-82b0-b75f196049b0-kube-api-access-sfbhn\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170059 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-service-ca\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170091 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-session\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-login\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170162 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-error\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170149 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03ad7961-295b-4e12-82b0-b75f196049b0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-serving-cert\") pod \"03ad7961-295b-4e12-82b0-b75f196049b0\" (UID: \"03ad7961-295b-4e12-82b0-b75f196049b0\") "
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170381 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-session\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170420 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170462 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170503 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js5qh\" (UniqueName: \"kubernetes.io/projected/cbf45aa6-1325-4885-ac0a-024cf8a778d2-kube-api-access-js5qh\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170538 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf45aa6-1325-4885-ac0a-024cf8a778d2-audit-dir\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-error\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170627 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-service-ca\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170662 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170696 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-audit-policies\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170734 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-login\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170771 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170772 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170805 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-router-certs\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170825 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170841 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170920 4812 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170943 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170961 4812 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/03ad7961-295b-4e12-82b0-b75f196049b0-audit-dir\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.170841 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cbf45aa6-1325-4885-ac0a-024cf8a778d2-audit-dir\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.171848 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.172038 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-audit-policies\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.172799 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-service-ca\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.172838 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.177234 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.177586 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.177802 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03ad7961-295b-4e12-82b0-b75f196049b0-kube-api-access-sfbhn" (OuterVolumeSpecName: "kube-api-access-sfbhn") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "kube-api-access-sfbhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.177867 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.177905 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-login\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.178110 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.178236 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-session\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.178772 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.179222 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.180271 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.180630 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.181591 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.181799 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.181822 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.182857 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-system-router-certs\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.183035 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.185454 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "03ad7961-295b-4e12-82b0-b75f196049b0" (UID: "03ad7961-295b-4e12-82b0-b75f196049b0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.192677 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-template-error\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.194643 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cbf45aa6-1325-4885-ac0a-024cf8a778d2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.202075 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js5qh\" (UniqueName: \"kubernetes.io/projected/cbf45aa6-1325-4885-ac0a-024cf8a778d2-kube-api-access-js5qh\") pod \"oauth-openshift-c78d49fb6-hfmvk\" (UID: \"cbf45aa6-1325-4885-ac0a-024cf8a778d2\") " pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.247304 4812 generic.go:334] "Generic (PLEG): container finished" podID="03ad7961-295b-4e12-82b0-b75f196049b0" containerID="edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee" exitCode=0
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.247394 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.247464 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" event={"ID":"03ad7961-295b-4e12-82b0-b75f196049b0","Type":"ContainerDied","Data":"edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee"}
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.247531 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fqzwn" event={"ID":"03ad7961-295b-4e12-82b0-b75f196049b0","Type":"ContainerDied","Data":"05a0ae1ac223e06c4426ce0719640bc10179e0ee635b6b3165af7f32ea1643f3"}
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.247558 4812 scope.go:117] "RemoveContainer" containerID="edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.272879 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.272992 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273012 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfbhn\" (UniqueName: \"kubernetes.io/projected/03ad7961-295b-4e12-82b0-b75f196049b0-kube-api-access-sfbhn\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273025 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273041 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273055 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273070 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273106 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273124 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273136 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.273217 4812 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/03ad7961-295b-4e12-82b0-b75f196049b0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.288732 4812 scope.go:117] "RemoveContainer" containerID="edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee"
Nov 24 19:20:43 crc kubenswrapper[4812]: E1124 19:20:43.289815 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee\": container with ID starting with edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee not found: ID does not exist" containerID="edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.289854 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee"} err="failed to get container status \"edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee\": rpc error: code = NotFound desc = could not find container \"edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee\": container with ID starting with edfced44cf5277e68981a2b2ba1c9dd542c00f04020ad22a1a0b1d096e5f05ee not found: ID does not exist"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.294447 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqzwn"]
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.299081 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fqzwn"]
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.335209 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:43 crc kubenswrapper[4812]: I1124 19:20:43.531435 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"]
Nov 24 19:20:44 crc kubenswrapper[4812]: I1124 19:20:44.254670 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk" event={"ID":"cbf45aa6-1325-4885-ac0a-024cf8a778d2","Type":"ContainerStarted","Data":"ae31b3bd6c3e503d8041cb59c70f1c8bdfb7f4133fee340e4700115a3b18ec9c"}
Nov 24 19:20:44 crc kubenswrapper[4812]: I1124 19:20:44.255189 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:44 crc kubenswrapper[4812]: I1124 19:20:44.255207 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk" event={"ID":"cbf45aa6-1325-4885-ac0a-024cf8a778d2","Type":"ContainerStarted","Data":"0fba9e87054faa4ab4bd2c8e493a4e0bd750120a67bb38b4558307e59b144ea6"}
Nov 24 19:20:44 crc kubenswrapper[4812]: I1124 19:20:44.292209 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk" podStartSLOduration=27.292186583 podStartE2EDuration="27.292186583s" podCreationTimestamp="2025-11-24 19:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:20:44.289433027 +0000 UTC m=+238.078385418" watchObservedRunningTime="2025-11-24 19:20:44.292186583 +0000 UTC m=+238.081138954"
Nov 24 19:20:44 crc kubenswrapper[4812]: I1124 19:20:44.426737 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c78d49fb6-hfmvk"
Nov 24 19:20:44 crc kubenswrapper[4812]: I1124 19:20:44.976805 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03ad7961-295b-4e12-82b0-b75f196049b0" path="/var/lib/kubelet/pods/03ad7961-295b-4e12-82b0-b75f196049b0/volumes"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.462780 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xhtx9"]
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.464038 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xhtx9" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="registry-server" containerID="cri-o://71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72" gracePeriod=30
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.471559 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-466wh"]
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.471874 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-466wh" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="registry-server" containerID="cri-o://cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2" gracePeriod=30
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.483808 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nvpxb"]
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.484090 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" podUID="04267c57-8e76-48a4-b768-ce571525ed62" containerName="marketplace-operator" containerID="cri-o://d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c" gracePeriod=30
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.494932 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzgzm"]
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.501494 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rzgzm" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="registry-server" containerID="cri-o://a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e" gracePeriod=30
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.505072 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbqxn"]
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.506124 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.508279 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtqhp"]
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.508637 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mtqhp" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="registry-server" containerID="cri-o://0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433" gracePeriod=30
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.522375 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdp56\" (UniqueName: \"kubernetes.io/projected/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-kube-api-access-vdp56\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.522449 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.522477 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.532208 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbqxn"]
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.624101 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdp56\" (UniqueName: \"kubernetes.io/projected/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-kube-api-access-vdp56\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.624166 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.624195 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.626135 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.632109 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.647648 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdp56\" (UniqueName: \"kubernetes.io/projected/d8906e19-a1ac-49ea-b9c0-d4470aa2b806-kube-api-access-vdp56\") pod \"marketplace-operator-79b997595-xbqxn\" (UID: \"d8906e19-a1ac-49ea-b9c0-d4470aa2b806\") " pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.839769 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn"
Nov 24 19:21:00 crc kubenswrapper[4812]: I1124 19:21:00.930320 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-466wh"
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.004862 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb"
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.031189 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-operator-metrics\") pod \"04267c57-8e76-48a4-b768-ce571525ed62\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") "
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.031226 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-trusted-ca\") pod \"04267c57-8e76-48a4-b768-ce571525ed62\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") "
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.031345 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb6l6\" (UniqueName: \"kubernetes.io/projected/04267c57-8e76-48a4-b768-ce571525ed62-kube-api-access-xb6l6\") pod \"04267c57-8e76-48a4-b768-ce571525ed62\" (UID: \"04267c57-8e76-48a4-b768-ce571525ed62\") "
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.031367 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-catalog-content\") pod \"da6d85be-188f-4631-af4a-49aa40bb0d3e\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") "
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.031389 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trvrd\" (UniqueName: \"kubernetes.io/projected/da6d85be-188f-4631-af4a-49aa40bb0d3e-kube-api-access-trvrd\") pod \"da6d85be-188f-4631-af4a-49aa40bb0d3e\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") "
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.031414 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-utilities\") pod \"da6d85be-188f-4631-af4a-49aa40bb0d3e\" (UID: \"da6d85be-188f-4631-af4a-49aa40bb0d3e\") "
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.034223 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-utilities" (OuterVolumeSpecName: "utilities") pod "da6d85be-188f-4631-af4a-49aa40bb0d3e" (UID: "da6d85be-188f-4631-af4a-49aa40bb0d3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.035192 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtqhp"
Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.036845 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "04267c57-8e76-48a4-b768-ce571525ed62" (UID: "04267c57-8e76-48a4-b768-ce571525ed62"). InnerVolumeSpecName "marketplace-trusted-ca".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.036969 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04267c57-8e76-48a4-b768-ce571525ed62-kube-api-access-xb6l6" (OuterVolumeSpecName: "kube-api-access-xb6l6") pod "04267c57-8e76-48a4-b768-ce571525ed62" (UID: "04267c57-8e76-48a4-b768-ce571525ed62"). InnerVolumeSpecName "kube-api-access-xb6l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.039145 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "04267c57-8e76-48a4-b768-ce571525ed62" (UID: "04267c57-8e76-48a4-b768-ce571525ed62"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.040132 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6d85be-188f-4631-af4a-49aa40bb0d3e-kube-api-access-trvrd" (OuterVolumeSpecName: "kube-api-access-trvrd") pod "da6d85be-188f-4631-af4a-49aa40bb0d3e" (UID: "da6d85be-188f-4631-af4a-49aa40bb0d3e"). InnerVolumeSpecName "kube-api-access-trvrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.054170 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.106194 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.133967 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da6d85be-188f-4631-af4a-49aa40bb0d3e" (UID: "da6d85be-188f-4631-af4a-49aa40bb0d3e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.134328 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-catalog-content\") pod \"d5b14a89-062b-410d-8d78-11239a856b78\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.134445 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-utilities\") pod \"d5b14a89-062b-410d-8d78-11239a856b78\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.134501 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-catalog-content\") pod \"12d754a7-703d-4e0a-a803-8a65a41786ed\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.134539 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87wkq\" (UniqueName: \"kubernetes.io/projected/d5b14a89-062b-410d-8d78-11239a856b78-kube-api-access-87wkq\") pod \"d5b14a89-062b-410d-8d78-11239a856b78\" (UID: \"d5b14a89-062b-410d-8d78-11239a856b78\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136015 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-utilities" (OuterVolumeSpecName: "utilities") pod "d5b14a89-062b-410d-8d78-11239a856b78" (UID: "d5b14a89-062b-410d-8d78-11239a856b78"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136040 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-catalog-content\") pod \"9605af63-3073-4b4f-bd24-f85dab94bac3\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136069 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-utilities\") pod \"12d754a7-703d-4e0a-a803-8a65a41786ed\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136101 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twtxz\" (UniqueName: \"kubernetes.io/projected/12d754a7-703d-4e0a-a803-8a65a41786ed-kube-api-access-twtxz\") pod \"12d754a7-703d-4e0a-a803-8a65a41786ed\" (UID: \"12d754a7-703d-4e0a-a803-8a65a41786ed\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136121 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66ct8\" (UniqueName: \"kubernetes.io/projected/9605af63-3073-4b4f-bd24-f85dab94bac3-kube-api-access-66ct8\") pod \"9605af63-3073-4b4f-bd24-f85dab94bac3\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136137 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-utilities\") pod \"9605af63-3073-4b4f-bd24-f85dab94bac3\" (UID: \"9605af63-3073-4b4f-bd24-f85dab94bac3\") " Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136380 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136397 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136409 4812 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04267c57-8e76-48a4-b768-ce571525ed62-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136418 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb6l6\" (UniqueName: \"kubernetes.io/projected/04267c57-8e76-48a4-b768-ce571525ed62-kube-api-access-xb6l6\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136427 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136435 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trvrd\" (UniqueName: \"kubernetes.io/projected/da6d85be-188f-4631-af4a-49aa40bb0d3e-kube-api-access-trvrd\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 
19:21:01.136443 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da6d85be-188f-4631-af4a-49aa40bb0d3e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.136880 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-utilities" (OuterVolumeSpecName: "utilities") pod "12d754a7-703d-4e0a-a803-8a65a41786ed" (UID: "12d754a7-703d-4e0a-a803-8a65a41786ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.139280 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d754a7-703d-4e0a-a803-8a65a41786ed-kube-api-access-twtxz" (OuterVolumeSpecName: "kube-api-access-twtxz") pod "12d754a7-703d-4e0a-a803-8a65a41786ed" (UID: "12d754a7-703d-4e0a-a803-8a65a41786ed"). InnerVolumeSpecName "kube-api-access-twtxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.141571 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b14a89-062b-410d-8d78-11239a856b78-kube-api-access-87wkq" (OuterVolumeSpecName: "kube-api-access-87wkq") pod "d5b14a89-062b-410d-8d78-11239a856b78" (UID: "d5b14a89-062b-410d-8d78-11239a856b78"). InnerVolumeSpecName "kube-api-access-87wkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.156837 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-utilities" (OuterVolumeSpecName: "utilities") pod "9605af63-3073-4b4f-bd24-f85dab94bac3" (UID: "9605af63-3073-4b4f-bd24-f85dab94bac3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.159156 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9605af63-3073-4b4f-bd24-f85dab94bac3-kube-api-access-66ct8" (OuterVolumeSpecName: "kube-api-access-66ct8") pod "9605af63-3073-4b4f-bd24-f85dab94bac3" (UID: "9605af63-3073-4b4f-bd24-f85dab94bac3"). InnerVolumeSpecName "kube-api-access-66ct8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.161207 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12d754a7-703d-4e0a-a803-8a65a41786ed" (UID: "12d754a7-703d-4e0a-a803-8a65a41786ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.189067 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5b14a89-062b-410d-8d78-11239a856b78" (UID: "d5b14a89-062b-410d-8d78-11239a856b78"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.240256 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twtxz\" (UniqueName: \"kubernetes.io/projected/12d754a7-703d-4e0a-a803-8a65a41786ed-kube-api-access-twtxz\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.240300 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66ct8\" (UniqueName: \"kubernetes.io/projected/9605af63-3073-4b4f-bd24-f85dab94bac3-kube-api-access-66ct8\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.240310 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.240319 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5b14a89-062b-410d-8d78-11239a856b78-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.240329 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.240357 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87wkq\" (UniqueName: \"kubernetes.io/projected/d5b14a89-062b-410d-8d78-11239a856b78-kube-api-access-87wkq\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.240367 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d754a7-703d-4e0a-a803-8a65a41786ed-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.265793 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9605af63-3073-4b4f-bd24-f85dab94bac3" (UID: "9605af63-3073-4b4f-bd24-f85dab94bac3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.341322 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9605af63-3073-4b4f-bd24-f85dab94bac3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.363576 4812 generic.go:334] "Generic (PLEG): container finished" podID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerID="cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2" exitCode=0 Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.363668 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-466wh" event={"ID":"da6d85be-188f-4631-af4a-49aa40bb0d3e","Type":"ContainerDied","Data":"cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.363692 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-466wh" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.363722 4812 scope.go:117] "RemoveContainer" containerID="cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.363707 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-466wh" event={"ID":"da6d85be-188f-4631-af4a-49aa40bb0d3e","Type":"ContainerDied","Data":"fdd626f4907fad15ac70b8e614d607cd7e152f68f22bf083032a026d26cd009e"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.367238 4812 generic.go:334] "Generic (PLEG): container finished" podID="d5b14a89-062b-410d-8d78-11239a856b78" containerID="71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72" exitCode=0 Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.367308 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xhtx9" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.367360 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhtx9" event={"ID":"d5b14a89-062b-410d-8d78-11239a856b78","Type":"ContainerDied","Data":"71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.367402 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xhtx9" event={"ID":"d5b14a89-062b-410d-8d78-11239a856b78","Type":"ContainerDied","Data":"6dbb30bfb051cc7b69c16e11b45fae5235d1e600171e8bfb3bc8a11efa59d107"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.369532 4812 generic.go:334] "Generic (PLEG): container finished" podID="04267c57-8e76-48a4-b768-ce571525ed62" containerID="d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c" exitCode=0 Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.369581 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" event={"ID":"04267c57-8e76-48a4-b768-ce571525ed62","Type":"ContainerDied","Data":"d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.369600 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" event={"ID":"04267c57-8e76-48a4-b768-ce571525ed62","Type":"ContainerDied","Data":"8047621a4f613eee1e27dca4e22c8386e3209902069afc802a3a1ee60c9f8146"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.369662 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nvpxb" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.382212 4812 generic.go:334] "Generic (PLEG): container finished" podID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerID="0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433" exitCode=0 Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.382272 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtqhp" event={"ID":"9605af63-3073-4b4f-bd24-f85dab94bac3","Type":"ContainerDied","Data":"0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.382301 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mtqhp" event={"ID":"9605af63-3073-4b4f-bd24-f85dab94bac3","Type":"ContainerDied","Data":"2bee4f81d27829cd6c30959fe059872fcb499342e89d291710730784443af8a5"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.382407 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mtqhp" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.385515 4812 scope.go:117] "RemoveContainer" containerID="cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.389264 4812 generic.go:334] "Generic (PLEG): container finished" podID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerID="a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e" exitCode=0 Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.389353 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzgzm" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.389514 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzgzm" event={"ID":"12d754a7-703d-4e0a-a803-8a65a41786ed","Type":"ContainerDied","Data":"a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.389626 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzgzm" event={"ID":"12d754a7-703d-4e0a-a803-8a65a41786ed","Type":"ContainerDied","Data":"4fe13b36d42ce55963524ac66446f11721c4b41d9e45994a60de86e2ff8f734c"} Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.408504 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xbqxn"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.419179 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-466wh"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.420385 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-466wh"] Nov 24 19:21:01 crc kubenswrapper[4812]: W1124 19:21:01.424475 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8906e19_a1ac_49ea_b9c0_d4470aa2b806.slice/crio-8a84cf4ff804b475d601de9e8943a0bdad462c63a865670d01fcecf249fabf6c WatchSource:0}: Error finding container 8a84cf4ff804b475d601de9e8943a0bdad462c63a865670d01fcecf249fabf6c: Status 404 returned error can't find the container with id 8a84cf4ff804b475d601de9e8943a0bdad462c63a865670d01fcecf249fabf6c Nov 24 19:21:01 crc 
kubenswrapper[4812]: I1124 19:21:01.424516 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xhtx9"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.428507 4812 scope.go:117] "RemoveContainer" containerID="35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.434386 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xhtx9"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.452248 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mtqhp"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.454582 4812 scope.go:117] "RemoveContainer" containerID="cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.460112 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mtqhp"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.464180 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nvpxb"] Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.464428 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2\": container with ID starting with cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2 not found: ID does not exist" containerID="cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.464470 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2"} err="failed to get container status \"cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2\": rpc error: code = NotFound desc = could not find container \"cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2\": container with ID starting with cf318b0b46bbf15cb885e5127bddee60a27daa06c14323f3e4a95e520dad72b2 not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.464497 4812 scope.go:117] "RemoveContainer" containerID="cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.465133 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd\": container with ID starting with cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd not found: ID does not exist" containerID="cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.465174 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd"} err="failed to get container status \"cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd\": rpc error: code = NotFound desc = could not find container \"cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd\": container with ID starting with cd0394f546ee821992cd2a53bef521feaf5f9c79b502f40c3cdc0835686015fd not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 
19:21:01.465208 4812 scope.go:117] "RemoveContainer" containerID="35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.465873 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03\": container with ID starting with 35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03 not found: ID does not exist" containerID="35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.465900 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03"} err="failed to get container status \"35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03\": rpc error: code = NotFound desc = could not find container \"35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03\": container with ID starting with 35f4f9bf22c02117b94b246051a46060ef3c96182ced99175f0d8235fa81af03 not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.465914 4812 scope.go:117] "RemoveContainer" containerID="71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.466423 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nvpxb"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.469138 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzgzm"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.470794 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzgzm"] Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.497443 4812 scope.go:117] "RemoveContainer" containerID="5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.518025 4812 scope.go:117] "RemoveContainer" containerID="06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.530281 4812 scope.go:117] "RemoveContainer" containerID="71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.530838 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72\": container with ID starting with 71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72 not found: ID does not exist" containerID="71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.530872 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72"} err="failed to get container status \"71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72\": rpc error: code = NotFound desc = could not find container \"71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72\": container with ID starting with 71fbf7f8e145636f29fc61ecbb4c9324f63395023423ce884f10062340ec3b72 not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.530897 
4812 scope.go:117] "RemoveContainer" containerID="5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.531367 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e\": container with ID starting with 5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e not found: ID does not exist" containerID="5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.531402 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e"} err="failed to get container status \"5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e\": rpc error: code = NotFound desc = could not find container \"5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e\": container with ID starting with 5a6e3d26245f2cab58a66d82132432cdceddd3e40d76b7a9d7ec14f901613d1e not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.531424 4812 scope.go:117] "RemoveContainer" containerID="06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.531900 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440\": container with ID starting with 06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440 not found: ID does not exist" containerID="06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.531934 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440"} err="failed to get container status \"06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440\": rpc error: code = NotFound desc = could not find container \"06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440\": container with ID starting with 06e2f1f7cd36047fe043aa09b17443deeba2848e0dde5676e423fca03b083440 not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.531949 4812 scope.go:117] "RemoveContainer" containerID="d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.548126 4812 scope.go:117] "RemoveContainer" containerID="d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.548649 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c\": container with ID starting with d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c not found: ID does not exist" containerID="d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.548690 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c"} err="failed to get container status 
\"d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c\": rpc error: code = NotFound desc = could not find container \"d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c\": container with ID starting with d130346746318faad1b53cadb4d4d35b6cbe7d429c55588a7bb106672bf83a7c not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.548714 4812 scope.go:117] "RemoveContainer" containerID="0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.561752 4812 scope.go:117] "RemoveContainer" containerID="3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.575660 4812 scope.go:117] "RemoveContainer" containerID="24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.590242 4812 scope.go:117] "RemoveContainer" containerID="0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.591151 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433\": container with ID starting with 0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433 not found: ID does not exist" containerID="0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.591185 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433"} err="failed to get container status \"0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433\": rpc error: code = NotFound desc = could not find container \"0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433\": container with ID starting with 0aa1a723a7963c378b5b61b53059ba1990e2ff8c764c7543db2fbfa537b2c433 not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.591210 4812 scope.go:117] "RemoveContainer" containerID="3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.591530 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a\": container with ID starting with 3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a not found: ID does not exist" containerID="3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.591585 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a"} err="failed to get container status \"3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a\": rpc error: code = NotFound desc = could not find container \"3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a\": container with ID starting with 3e5591f9671cc45e293addbf8b4170c0549385993e588264c31156afd564462a not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.591622 4812 scope.go:117] "RemoveContainer" containerID="24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206" Nov 24 19:21:01 crc 
kubenswrapper[4812]: E1124 19:21:01.592006 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206\": container with ID starting with 24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206 not found: ID does not exist" containerID="24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.592063 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206"} err="failed to get container status \"24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206\": rpc error: code = NotFound desc = could not find container \"24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206\": container with ID starting with 24a337ebed84e374390629efbb4ad413fa11671d6e4cf637e4a605dd479d4206 not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.592106 4812 scope.go:117] "RemoveContainer" containerID="a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.652696 4812 scope.go:117] "RemoveContainer" containerID="65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.670927 4812 scope.go:117] "RemoveContainer" containerID="80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.688797 4812 scope.go:117] "RemoveContainer" containerID="a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.689416 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e\": container with ID starting with a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e not found: ID does not exist" containerID="a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.689459 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e"} err="failed to get container status \"a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e\": rpc error: code = NotFound desc = could not find container \"a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e\": container with ID starting with a1d7df426415432df476bafb1397191b92a124265083be60034b8a7fdb4ae03e not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.689507 4812 scope.go:117] "RemoveContainer" containerID="65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.690254 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb\": container with ID starting with 65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb not found: ID does not exist" containerID="65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.690285 4812 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb"} err="failed to get container status \"65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb\": rpc error: code = NotFound desc = could not find container \"65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb\": container with ID starting with 65b33d6a11e219e7e8780776b70faf2e31990c304b36675f18015d223e99abbb not found: ID does not exist" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.690307 4812 scope.go:117] "RemoveContainer" containerID="80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993" Nov 24 19:21:01 crc kubenswrapper[4812]: E1124 19:21:01.690729 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993\": container with ID starting with 80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993 not found: ID does not exist" containerID="80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993" Nov 24 19:21:01 crc kubenswrapper[4812]: I1124 19:21:01.690767 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993"} err="failed to get container status \"80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993\": rpc error: code = NotFound desc = could not find container \"80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993\": container with ID starting with 80e397673b4d128e8553cb59f5250ef55a9ba8cce46be3f93ed24c609c23f993 not found: ID does not exist" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.398106 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn" event={"ID":"d8906e19-a1ac-49ea-b9c0-d4470aa2b806","Type":"ContainerStarted","Data":"5ed70df17aeaf9019c12898b2ef61d00ab2ca744b30c3d5915c3c61f0b4dc553"} Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.398391 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn" event={"ID":"d8906e19-a1ac-49ea-b9c0-d4470aa2b806","Type":"ContainerStarted","Data":"8a84cf4ff804b475d601de9e8943a0bdad462c63a865670d01fcecf249fabf6c"} Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.398479 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.403123 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.426220 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xbqxn" podStartSLOduration=2.426197933 podStartE2EDuration="2.426197933s" podCreationTimestamp="2025-11-24 19:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:21:02.419151077 +0000 UTC m=+256.208103448" watchObservedRunningTime="2025-11-24 19:21:02.426197933 +0000 UTC m=+256.215150304" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682198 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-xqfb9"] Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682551 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682579 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682600 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682617 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682645 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682659 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682687 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682699 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682715 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682727 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682744 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682756 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682774 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682786 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682806 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682819 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682835 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682848 4812 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.682863 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04267c57-8e76-48a4-b768-ce571525ed62" containerName="marketplace-operator" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.682875 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04267c57-8e76-48a4-b768-ce571525ed62" containerName="marketplace-operator" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.683455 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683476 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="extract-content" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.683491 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683499 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="extract-utilities" Nov 24 19:21:02 crc kubenswrapper[4812]: E1124 19:21:02.683515 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683526 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683768 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b14a89-062b-410d-8d78-11239a856b78" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683792 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683803 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683814 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="04267c57-8e76-48a4-b768-ce571525ed62" containerName="marketplace-operator" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.683823 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" containerName="registry-server" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.684754 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.688057 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.695511 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqfb9"] Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.862411 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l4l\" (UniqueName: \"kubernetes.io/projected/580a9469-6660-444c-ae5b-d5eb1de8554c-kube-api-access-z5l4l\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.862472 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-utilities\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.862502 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-catalog-content\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.879891 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vdg8s"] Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.881016 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.882846 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.890068 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdg8s"] Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.967369 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5l4l\" (UniqueName: \"kubernetes.io/projected/580a9469-6660-444c-ae5b-d5eb1de8554c-kube-api-access-z5l4l\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.967489 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-utilities\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.967533 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-catalog-content\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.968490 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-utilities\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.969171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-catalog-content\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.976408 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04267c57-8e76-48a4-b768-ce571525ed62" path="/var/lib/kubelet/pods/04267c57-8e76-48a4-b768-ce571525ed62/volumes" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.977488 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d754a7-703d-4e0a-a803-8a65a41786ed" path="/var/lib/kubelet/pods/12d754a7-703d-4e0a-a803-8a65a41786ed/volumes" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.978655 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9605af63-3073-4b4f-bd24-f85dab94bac3" path="/var/lib/kubelet/pods/9605af63-3073-4b4f-bd24-f85dab94bac3/volumes" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.980695 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b14a89-062b-410d-8d78-11239a856b78" path="/var/lib/kubelet/pods/d5b14a89-062b-410d-8d78-11239a856b78/volumes" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.982456 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6d85be-188f-4631-af4a-49aa40bb0d3e" 
path="/var/lib/kubelet/pods/da6d85be-188f-4631-af4a-49aa40bb0d3e/volumes" Nov 24 19:21:02 crc kubenswrapper[4812]: I1124 19:21:02.994084 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5l4l\" (UniqueName: \"kubernetes.io/projected/580a9469-6660-444c-ae5b-d5eb1de8554c-kube-api-access-z5l4l\") pod \"redhat-operators-xqfb9\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.018576 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.068501 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86c7f\" (UniqueName: \"kubernetes.io/projected/de8b682e-4003-4de2-b2b2-68a61dbb5835-kube-api-access-86c7f\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.068645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8b682e-4003-4de2-b2b2-68a61dbb5835-catalog-content\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.068854 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8b682e-4003-4de2-b2b2-68a61dbb5835-utilities\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.169850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86c7f\" (UniqueName: \"kubernetes.io/projected/de8b682e-4003-4de2-b2b2-68a61dbb5835-kube-api-access-86c7f\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.169896 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8b682e-4003-4de2-b2b2-68a61dbb5835-catalog-content\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.169944 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8b682e-4003-4de2-b2b2-68a61dbb5835-utilities\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.170417 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/de8b682e-4003-4de2-b2b2-68a61dbb5835-utilities\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.170709 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/de8b682e-4003-4de2-b2b2-68a61dbb5835-catalog-content\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.190366 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86c7f\" (UniqueName: \"kubernetes.io/projected/de8b682e-4003-4de2-b2b2-68a61dbb5835-kube-api-access-86c7f\") pod \"redhat-marketplace-vdg8s\" (UID: \"de8b682e-4003-4de2-b2b2-68a61dbb5835\") " pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.208246 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.407745 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xqfb9"] Nov 24 19:21:03 crc kubenswrapper[4812]: W1124 19:21:03.410257 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod580a9469_6660_444c_ae5b_d5eb1de8554c.slice/crio-67072fb41c821e24cc63b825126fcea4f144e3427dec2eae3882e3181f811df5 WatchSource:0}: Error finding container 67072fb41c821e24cc63b825126fcea4f144e3427dec2eae3882e3181f811df5: Status 404 returned error can't find the container with id 67072fb41c821e24cc63b825126fcea4f144e3427dec2eae3882e3181f811df5 Nov 24 19:21:03 crc kubenswrapper[4812]: I1124 19:21:03.606479 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdg8s"] Nov 24 19:21:03 crc kubenswrapper[4812]: W1124 19:21:03.612608 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde8b682e_4003_4de2_b2b2_68a61dbb5835.slice/crio-1ed60610304751aa0ac7f0cdfbdae06cd12c61877ee530a73469f487d36ef802 WatchSource:0}: Error finding container 1ed60610304751aa0ac7f0cdfbdae06cd12c61877ee530a73469f487d36ef802: Status 404 returned error can't find the container with id 1ed60610304751aa0ac7f0cdfbdae06cd12c61877ee530a73469f487d36ef802 Nov 24 19:21:04 crc kubenswrapper[4812]: I1124 19:21:04.421154 4812 generic.go:334] "Generic (PLEG): container finished" podID="de8b682e-4003-4de2-b2b2-68a61dbb5835" containerID="ac4dc747cd714b3531b14555a3a78a92d49d80e851366073cabe95c7dbefe1c6" exitCode=0 Nov 24 19:21:04 crc kubenswrapper[4812]: I1124 19:21:04.421224 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdg8s" event={"ID":"de8b682e-4003-4de2-b2b2-68a61dbb5835","Type":"ContainerDied","Data":"ac4dc747cd714b3531b14555a3a78a92d49d80e851366073cabe95c7dbefe1c6"} Nov 24 19:21:04 crc kubenswrapper[4812]: I1124 19:21:04.421556 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdg8s" event={"ID":"de8b682e-4003-4de2-b2b2-68a61dbb5835","Type":"ContainerStarted","Data":"1ed60610304751aa0ac7f0cdfbdae06cd12c61877ee530a73469f487d36ef802"} Nov 24 19:21:04 crc kubenswrapper[4812]: I1124 19:21:04.423904 4812 generic.go:334] "Generic (PLEG): container finished" podID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerID="7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170" exitCode=0
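The SyncLoop ADD/UPDATE lines and the PLEG ContainerDied/ContainerStarted pairs above are kubelet-internal relists, but the same lifecycle transitions surface through the API server as watch events. A hedged client-go sketch that mirrors them from outside the node, assuming a reachable kubeconfig at the default path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials come from ~/.kube/config; in-cluster config works too.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().Pods("openshift-marketplace").Watch(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			// ADDED/MODIFIED here correspond to the SyncLoop ADD/UPDATE entries in the journal.
			fmt.Println(ev.Type, pod.Name, pod.Status.Phase)
		}
	}
}

The init containers that extract the catalog content are what exit with exitCode=0 here; the registry container proper starts only afterwards.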
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqfb9" event={"ID":"580a9469-6660-444c-ae5b-d5eb1de8554c","Type":"ContainerDied","Data":"7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170"} Nov 24 19:21:04 crc kubenswrapper[4812]: I1124 19:21:04.424510 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqfb9" event={"ID":"580a9469-6660-444c-ae5b-d5eb1de8554c","Type":"ContainerStarted","Data":"67072fb41c821e24cc63b825126fcea4f144e3427dec2eae3882e3181f811df5"} Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.079657 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6rkng"] Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.082000 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.085083 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.097509 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rkng"] Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.196530 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6fg\" (UniqueName: \"kubernetes.io/projected/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-kube-api-access-ps6fg\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.196670 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-utilities\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.196803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-catalog-content\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.284958 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7zz5v"] Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.286040 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.289259 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298174 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6fg\" (UniqueName: \"kubernetes.io/projected/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-kube-api-access-ps6fg\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298210 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-utilities\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298284 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bc7\" (UniqueName: \"kubernetes.io/projected/8114203a-6cba-4533-b7d5-7379db397421-kube-api-access-v4bc7\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298306 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-catalog-content\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-utilities\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298373 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-catalog-content\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298880 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-utilities\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.298915 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-catalog-content\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.305989 4812 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zz5v"] Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.321747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6fg\" (UniqueName: \"kubernetes.io/projected/338b7e7c-a0ec-4709-87d2-27b5d5852f3c-kube-api-access-ps6fg\") pod \"community-operators-6rkng\" (UID: \"338b7e7c-a0ec-4709-87d2-27b5d5852f3c\") " pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.399321 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bc7\" (UniqueName: \"kubernetes.io/projected/8114203a-6cba-4533-b7d5-7379db397421-kube-api-access-v4bc7\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.399399 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-utilities\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.399430 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-catalog-content\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.399985 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-catalog-content\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.399992 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-utilities\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.411492 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.422713 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bc7\" (UniqueName: \"kubernetes.io/projected/8114203a-6cba-4533-b7d5-7379db397421-kube-api-access-v4bc7\") pod \"certified-operators-7zz5v\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.432841 4812 generic.go:334] "Generic (PLEG): container finished" podID="de8b682e-4003-4de2-b2b2-68a61dbb5835" containerID="5b81c327d0bf489ae0bac205d2c43d143625396357b98aed77a3b973032f96f3" exitCode=0 Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.432881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdg8s" event={"ID":"de8b682e-4003-4de2-b2b2-68a61dbb5835","Type":"ContainerDied","Data":"5b81c327d0bf489ae0bac205d2c43d143625396357b98aed77a3b973032f96f3"} Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.613229 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.624149 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rkng"] Nov 24 19:21:05 crc kubenswrapper[4812]: I1124 19:21:05.804519 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7zz5v"] Nov 24 19:21:05 crc kubenswrapper[4812]: W1124 19:21:05.850906 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8114203a_6cba_4533_b7d5_7379db397421.slice/crio-42e85340d25deec1db994ef1498c74c18f1b809d2ae846c9242ba2ba393580a7 WatchSource:0}: Error finding container 42e85340d25deec1db994ef1498c74c18f1b809d2ae846c9242ba2ba393580a7: Status 404 returned error can't find the container with id 42e85340d25deec1db994ef1498c74c18f1b809d2ae846c9242ba2ba393580a7 Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.441285 4812 generic.go:334] "Generic (PLEG): container finished" podID="338b7e7c-a0ec-4709-87d2-27b5d5852f3c" containerID="4b26e7a83720c04967b07ce65d04dbc083615c3e71ebc38f542645e207840aba" exitCode=0 Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.441822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rkng" event={"ID":"338b7e7c-a0ec-4709-87d2-27b5d5852f3c","Type":"ContainerDied","Data":"4b26e7a83720c04967b07ce65d04dbc083615c3e71ebc38f542645e207840aba"} Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.441870 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rkng" event={"ID":"338b7e7c-a0ec-4709-87d2-27b5d5852f3c","Type":"ContainerStarted","Data":"fa0fa0277c081417424a8ca3206fe33825a8adec2a7b6357acb4368ed1eb1975"} Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.449957 4812 generic.go:334] "Generic (PLEG): container finished" podID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerID="2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f" exitCode=0 Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.450003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqfb9" 
event={"ID":"580a9469-6660-444c-ae5b-d5eb1de8554c","Type":"ContainerDied","Data":"2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f"} Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.452542 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdg8s" event={"ID":"de8b682e-4003-4de2-b2b2-68a61dbb5835","Type":"ContainerStarted","Data":"e1d36d37d458b96cb84124dc988ad88fde17b8c8b9ffcd820f7f825c81232713"} Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.453535 4812 generic.go:334] "Generic (PLEG): container finished" podID="8114203a-6cba-4533-b7d5-7379db397421" containerID="e6688398ac818e298dcea0f0fa2556f3a7e0230aced1e5471a93e24f15899b82" exitCode=0 Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.453573 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zz5v" event={"ID":"8114203a-6cba-4533-b7d5-7379db397421","Type":"ContainerDied","Data":"e6688398ac818e298dcea0f0fa2556f3a7e0230aced1e5471a93e24f15899b82"} Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.453593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zz5v" event={"ID":"8114203a-6cba-4533-b7d5-7379db397421","Type":"ContainerStarted","Data":"42e85340d25deec1db994ef1498c74c18f1b809d2ae846c9242ba2ba393580a7"} Nov 24 19:21:06 crc kubenswrapper[4812]: I1124 19:21:06.510953 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vdg8s" podStartSLOduration=3.016952139 podStartE2EDuration="4.510930027s" podCreationTimestamp="2025-11-24 19:21:02 +0000 UTC" firstStartedPulling="2025-11-24 19:21:04.42509312 +0000 UTC m=+258.214045491" lastFinishedPulling="2025-11-24 19:21:05.919071018 +0000 UTC m=+259.708023379" observedRunningTime="2025-11-24 19:21:06.510129944 +0000 UTC m=+260.299082315" watchObservedRunningTime="2025-11-24 19:21:06.510930027 +0000 UTC m=+260.299882418" Nov 24 19:21:07 crc kubenswrapper[4812]: I1124 19:21:07.460089 4812 generic.go:334] "Generic (PLEG): container finished" podID="8114203a-6cba-4533-b7d5-7379db397421" containerID="5126243c6243f164f9b0e13e295778a92d1413c5e2c77bfa4c33ba5c7d3809de" exitCode=0 Nov 24 19:21:07 crc kubenswrapper[4812]: I1124 19:21:07.460167 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zz5v" event={"ID":"8114203a-6cba-4533-b7d5-7379db397421","Type":"ContainerDied","Data":"5126243c6243f164f9b0e13e295778a92d1413c5e2c77bfa4c33ba5c7d3809de"} Nov 24 19:21:07 crc kubenswrapper[4812]: I1124 19:21:07.469127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqfb9" event={"ID":"580a9469-6660-444c-ae5b-d5eb1de8554c","Type":"ContainerStarted","Data":"79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c"} Nov 24 19:21:07 crc kubenswrapper[4812]: I1124 19:21:07.506884 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xqfb9" podStartSLOduration=3.060072439 podStartE2EDuration="5.506867931s" podCreationTimestamp="2025-11-24 19:21:02 +0000 UTC" firstStartedPulling="2025-11-24 19:21:04.425219233 +0000 UTC m=+258.214171604" lastFinishedPulling="2025-11-24 19:21:06.872014725 +0000 UTC m=+260.660967096" observedRunningTime="2025-11-24 19:21:07.504497175 +0000 UTC m=+261.293449546" watchObservedRunningTime="2025-11-24 19:21:07.506867931 +0000 UTC m=+261.295820302" Nov 24 19:21:08 crc 
Nov 24 19:21:08 crc kubenswrapper[4812]: I1124 19:21:08.476881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zz5v" event={"ID":"8114203a-6cba-4533-b7d5-7379db397421","Type":"ContainerStarted","Data":"5144499394f8e032427dc8c346888fa30685ee7e43fe7a70cf9d6e442b6071f3"} Nov 24 19:21:08 crc kubenswrapper[4812]: I1124 19:21:08.491001 4812 generic.go:334] "Generic (PLEG): container finished" podID="338b7e7c-a0ec-4709-87d2-27b5d5852f3c" containerID="e084fd25f281daf7340d7d23c543c755a3bafc2da7bd402232c58fc8d8292fc7" exitCode=0 Nov 24 19:21:08 crc kubenswrapper[4812]: I1124 19:21:08.491075 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rkng" event={"ID":"338b7e7c-a0ec-4709-87d2-27b5d5852f3c","Type":"ContainerDied","Data":"e084fd25f281daf7340d7d23c543c755a3bafc2da7bd402232c58fc8d8292fc7"} Nov 24 19:21:08 crc kubenswrapper[4812]: I1124 19:21:08.506055 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7zz5v" podStartSLOduration=2.115795943 podStartE2EDuration="3.506036272s" podCreationTimestamp="2025-11-24 19:21:05 +0000 UTC" firstStartedPulling="2025-11-24 19:21:06.454713351 +0000 UTC m=+260.243665722" lastFinishedPulling="2025-11-24 19:21:07.84495368 +0000 UTC m=+261.633906051" observedRunningTime="2025-11-24 19:21:08.504395381 +0000 UTC m=+262.293347772" watchObservedRunningTime="2025-11-24 19:21:08.506036272 +0000 UTC m=+262.294988643" Nov 24 19:21:09 crc kubenswrapper[4812]: I1124 19:21:09.504820 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rkng" event={"ID":"338b7e7c-a0ec-4709-87d2-27b5d5852f3c","Type":"ContainerStarted","Data":"717c9e4f77e821c85ac6a6236ebd9594d822f2148ea4cc2eedd2347910139ce1"} Nov 24 19:21:09 crc kubenswrapper[4812]: I1124 19:21:09.527654 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6rkng" podStartSLOduration=2.061089207 podStartE2EDuration="4.527632112s" podCreationTimestamp="2025-11-24 19:21:05 +0000 UTC" firstStartedPulling="2025-11-24 19:21:06.445460483 +0000 UTC m=+260.234412864" lastFinishedPulling="2025-11-24 19:21:08.912003398 +0000 UTC m=+262.700955769" observedRunningTime="2025-11-24 19:21:09.525003331 +0000 UTC m=+263.313955712" watchObservedRunningTime="2025-11-24 19:21:09.527632112 +0000 UTC m=+263.316584503" Nov 24 19:21:13 crc kubenswrapper[4812]: I1124 19:21:13.019578 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:13 crc kubenswrapper[4812]: I1124 19:21:13.020147 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:13 crc kubenswrapper[4812]: I1124 19:21:13.068005 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:13 crc kubenswrapper[4812]: I1124 19:21:13.208770 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:13 crc kubenswrapper[4812]: I1124 19:21:13.208833 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vdg8s"
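The probe transitions read in a fixed order: the startup probe reports unhealthy once, then started, and only afterwards does the readiness probe begin reporting (an empty status first, then ready), because readiness and liveness probing are suspended until the startup probe succeeds. The journal records only probe results, never the pod spec, so the command and timings in this Go sketch are assumptions chosen to match that gating behavior:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical probe command and timings; the log lines above show only the
	// outcomes ("unhealthy" -> "started", then readiness "ready"), not the spec.
	cmd := []string{"grpc_health_probe", "-addr=:50051"}
	startup := corev1.Probe{
		ProbeHandler:     corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: cmd}},
		PeriodSeconds:    10,
		FailureThreshold: 15, // up to ~150s of grace while the catalog index loads
	}
	readiness := corev1.Probe{
		ProbeHandler:  corev1.ProbeHandler{Exec: &corev1.ExecAction{Command: cmd}},
		PeriodSeconds: 10,
	}
	// Readiness (and liveness) probes only run once the startup probe has
	// succeeded, which is why probe="startup" status="started" always precedes
	// probe="readiness" status="ready" in these entries.
	fmt.Println(startup.FailureThreshold, readiness.PeriodSeconds)
}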
pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:13 crc kubenswrapper[4812]: I1124 19:21:13.561603 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vdg8s" Nov 24 19:21:13 crc kubenswrapper[4812]: I1124 19:21:13.571716 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:21:15 crc kubenswrapper[4812]: I1124 19:21:15.412756 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:15 crc kubenswrapper[4812]: I1124 19:21:15.413393 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:15 crc kubenswrapper[4812]: I1124 19:21:15.466279 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:15 crc kubenswrapper[4812]: I1124 19:21:15.590127 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6rkng" Nov 24 19:21:15 crc kubenswrapper[4812]: I1124 19:21:15.614507 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:15 crc kubenswrapper[4812]: I1124 19:21:15.615104 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:15 crc kubenswrapper[4812]: I1124 19:21:15.678497 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:21:16 crc kubenswrapper[4812]: I1124 19:21:16.584690 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 19:23:02 crc kubenswrapper[4812]: I1124 19:23:02.998683 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:23:02 crc kubenswrapper[4812]: I1124 19:23:02.999457 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:23:23 crc kubenswrapper[4812]: I1124 19:23:23.990654 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kvgwp"] Nov 24 19:23:23 crc kubenswrapper[4812]: I1124 19:23:23.992781 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.011778 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kvgwp"] Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142412 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a943b6-a9f5-45fb-bddd-cbcc4040619d-trusted-ca\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-registry-tls\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142545 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-bound-sa-token\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142726 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04a943b6-a9f5-45fb-bddd-cbcc4040619d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142810 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04a943b6-a9f5-45fb-bddd-cbcc4040619d-registry-certificates\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142902 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmlt\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-kube-api-access-2xmlt\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.142969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/04a943b6-a9f5-45fb-bddd-cbcc4040619d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.167523 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.243916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04a943b6-a9f5-45fb-bddd-cbcc4040619d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.244035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a943b6-a9f5-45fb-bddd-cbcc4040619d-trusted-ca\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.244087 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-registry-tls\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.244128 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-bound-sa-token\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.244176 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04a943b6-a9f5-45fb-bddd-cbcc4040619d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.244218 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04a943b6-a9f5-45fb-bddd-cbcc4040619d-registry-certificates\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.244266 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmlt\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-kube-api-access-2xmlt\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.245601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/04a943b6-a9f5-45fb-bddd-cbcc4040619d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.246674 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/04a943b6-a9f5-45fb-bddd-cbcc4040619d-registry-certificates\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.247807 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/04a943b6-a9f5-45fb-bddd-cbcc4040619d-trusted-ca\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.252299 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/04a943b6-a9f5-45fb-bddd-cbcc4040619d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.253132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-registry-tls\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.287916 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-bound-sa-token\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.290450 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmlt\" (UniqueName: \"kubernetes.io/projected/04a943b6-a9f5-45fb-bddd-cbcc4040619d-kube-api-access-2xmlt\") pod \"image-registry-66df7c8f76-kvgwp\" (UID: \"04a943b6-a9f5-45fb-bddd-cbcc4040619d\") " pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.317151 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:24 crc kubenswrapper[4812]: I1124 19:23:24.787854 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-kvgwp"] Nov 24 19:23:25 crc kubenswrapper[4812]: I1124 19:23:25.536518 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" event={"ID":"04a943b6-a9f5-45fb-bddd-cbcc4040619d","Type":"ContainerStarted","Data":"8a6b2e8c8a0ce911d01db0c4c364faa390f6aeef01d459a59b693477b611e6a2"} Nov 24 19:23:25 crc kubenswrapper[4812]: I1124 19:23:25.536940 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:25 crc kubenswrapper[4812]: I1124 19:23:25.536958 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" event={"ID":"04a943b6-a9f5-45fb-bddd-cbcc4040619d","Type":"ContainerStarted","Data":"3f47a003789b2b7bedc4a5ac27a312bf3925440b814a4e6f6de60973a8d35051"} Nov 24 19:23:25 crc kubenswrapper[4812]: I1124 19:23:25.555041 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" podStartSLOduration=2.555022161 podStartE2EDuration="2.555022161s" podCreationTimestamp="2025-11-24 19:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:23:25.554401003 +0000 UTC m=+399.343353404" watchObservedRunningTime="2025-11-24 19:23:25.555022161 +0000 UTC m=+399.343974542" Nov 24 19:23:32 crc kubenswrapper[4812]: I1124 19:23:32.998703 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:23:33 crc kubenswrapper[4812]: I1124 19:23:32.999869 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:23:44 crc kubenswrapper[4812]: I1124 19:23:44.322389 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-kvgwp" Nov 24 19:23:44 crc kubenswrapper[4812]: I1124 19:23:44.393641 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd6qw"] Nov 24 19:24:02 crc kubenswrapper[4812]: I1124 19:24:02.998996 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:02.999832 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:02.999917 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:03.000976 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9eaff0558087aaedaaf1b5c0557fb9301e3374c23c9d5289ea7c142bf667389a"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:03.001110 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://9eaff0558087aaedaaf1b5c0557fb9301e3374c23c9d5289ea7c142bf667389a" gracePeriod=600 Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:03.799143 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="9eaff0558087aaedaaf1b5c0557fb9301e3374c23c9d5289ea7c142bf667389a" exitCode=0 Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:03.799217 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"9eaff0558087aaedaaf1b5c0557fb9301e3374c23c9d5289ea7c142bf667389a"} Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:03.799881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"507ee7ea38a17f74128285fb7607b35f01e6c8bdad11e77d054df56b321b0fc0"} Nov 24 19:24:03 crc kubenswrapper[4812]: I1124 19:24:03.799920 4812 scope.go:117] "RemoveContainer" containerID="79400aabb1ee98cb416cda0b0553c8dd9e8e472afc96576aadc24bbf3ab66e6b" Nov 24 19:24:09 crc kubenswrapper[4812]: I1124 19:24:09.452808 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" podUID="74998661-1cd3-4ea2-ae49-b1e1a17da3e4" containerName="registry" containerID="cri-o://5ef9c0a0654c3b8b66f7544de0348e3426931d117121c7c27d213fc6a258200f" gracePeriod=30 Nov 24 19:24:09 crc kubenswrapper[4812]: I1124 19:24:09.842198 4812 generic.go:334] "Generic (PLEG): container finished" podID="74998661-1cd3-4ea2-ae49-b1e1a17da3e4" containerID="5ef9c0a0654c3b8b66f7544de0348e3426931d117121c7c27d213fc6a258200f" exitCode=0 Nov 24 19:24:09 crc kubenswrapper[4812]: I1124 19:24:09.842302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" event={"ID":"74998661-1cd3-4ea2-ae49-b1e1a17da3e4","Type":"ContainerDied","Data":"5ef9c0a0654c3b8b66f7544de0348e3426931d117121c7c27d213fc6a258200f"} Nov 24 19:24:09 crc kubenswrapper[4812]: I1124 19:24:09.842592 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" event={"ID":"74998661-1cd3-4ea2-ae49-b1e1a17da3e4","Type":"ContainerDied","Data":"eb54f30a369f16fd9bdea35066417a2cd87a84227b6b211d964c197fca2c28d2"} Nov 24 19:24:09 crc kubenswrapper[4812]: I1124 19:24:09.842611 4812 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb54f30a369f16fd9bdea35066417a2cd87a84227b6b211d964c197fca2c28d2" Nov 24 19:24:09 crc kubenswrapper[4812]: I1124 19:24:09.884027 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.062623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-tls\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.062687 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-bound-sa-token\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.062748 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-trusted-ca\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.062831 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-ca-trust-extracted\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.062894 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfsnq\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-kube-api-access-zfsnq\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.063247 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.063375 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-certificates\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.063485 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-installation-pull-secrets\") pod \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\" (UID: \"74998661-1cd3-4ea2-ae49-b1e1a17da3e4\") " Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.064401 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: 
"74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.068022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.072502 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.073187 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-kube-api-access-zfsnq" (OuterVolumeSpecName: "kube-api-access-zfsnq") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "kube-api-access-zfsnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.083801 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.084955 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.085080 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.087530 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "74998661-1cd3-4ea2-ae49-b1e1a17da3e4" (UID: "74998661-1cd3-4ea2-ae49-b1e1a17da3e4"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.165110 4812 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.165165 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfsnq\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-kube-api-access-zfsnq\") on node \"crc\" DevicePath \"\"" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.165178 4812 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.165189 4812 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.165200 4812 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.165212 4812 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.165222 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74998661-1cd3-4ea2-ae49-b1e1a17da3e4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.848446 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bd6qw" Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.889558 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd6qw"] Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.895048 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bd6qw"] Nov 24 19:24:10 crc kubenswrapper[4812]: I1124 19:24:10.978157 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74998661-1cd3-4ea2-ae49-b1e1a17da3e4" path="/var/lib/kubelet/pods/74998661-1cd3-4ea2-ae49-b1e1a17da3e4/volumes" Nov 24 19:25:47 crc kubenswrapper[4812]: I1124 19:25:47.174862 4812 scope.go:117] "RemoveContainer" containerID="5ef9c0a0654c3b8b66f7544de0348e3426931d117121c7c27d213fc6a258200f" Nov 24 19:26:32 crc kubenswrapper[4812]: I1124 19:26:32.998380 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:26:32 crc kubenswrapper[4812]: I1124 19:26:32.999067 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:27:02 crc kubenswrapper[4812]: I1124 19:27:02.998423 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:27:03 crc kubenswrapper[4812]: I1124 19:27:02.999091 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:27:32 crc kubenswrapper[4812]: I1124 19:27:32.998672 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:27:33 crc kubenswrapper[4812]: I1124 19:27:32.999370 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:27:33 crc kubenswrapper[4812]: I1124 19:27:32.999450 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk"
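This is the same liveness pattern that already fired at 19:24:03: probes against http://127.0.0.1:8798/health land every 30 seconds (the …:02 and …:32 timestamps) and the restart follows the third consecutive connection refusal, which is consistent with periodSeconds=30 and failureThreshold=3; the gracePeriod=600 on the subsequent kill comes from the pod's termination grace period. A reconstruction of such a probe in Go; the numbers are inferred from the timestamps, not taken from the machine-config-daemon manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Inferred, not read from the pod spec: 30s spacing between probes and a
	// restart after the third straight failure match the journal's timeline.
	liveness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Host: "127.0.0.1",
				Path: "/health",
				Port: intstr.FromInt(8798),
			},
		},
		PeriodSeconds:    30,
		FailureThreshold: 3,
	}
	fmt.Printf("GET http://%s:%s%s\n",
		liveness.HTTPGet.Host, liveness.HTTPGet.Port.String(), liveness.HTTPGet.Path)
}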
containerStatusID={"Type":"cri-o","ID":"507ee7ea38a17f74128285fb7607b35f01e6c8bdad11e77d054df56b321b0fc0"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:27:33 crc kubenswrapper[4812]: I1124 19:27:33.000246 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://507ee7ea38a17f74128285fb7607b35f01e6c8bdad11e77d054df56b321b0fc0" gracePeriod=600 Nov 24 19:27:33 crc kubenswrapper[4812]: I1124 19:27:33.278807 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="507ee7ea38a17f74128285fb7607b35f01e6c8bdad11e77d054df56b321b0fc0" exitCode=0 Nov 24 19:27:33 crc kubenswrapper[4812]: I1124 19:27:33.279045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"507ee7ea38a17f74128285fb7607b35f01e6c8bdad11e77d054df56b321b0fc0"} Nov 24 19:27:33 crc kubenswrapper[4812]: I1124 19:27:33.279219 4812 scope.go:117] "RemoveContainer" containerID="9eaff0558087aaedaaf1b5c0557fb9301e3374c23c9d5289ea7c142bf667389a" Nov 24 19:27:34 crc kubenswrapper[4812]: I1124 19:27:34.288310 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"90a8aabf222225c1c2c6e28ca52be1f8419a4b406dd9e47fff7f3d92641c4332"} Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.617407 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dgm54"] Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.618250 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-controller" containerID="cri-o://227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" gracePeriod=30 Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.618398 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-node" containerID="cri-o://2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" gracePeriod=30 Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.618327 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="nbdb" containerID="cri-o://9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" gracePeriod=30 Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.618458 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-acl-logging" containerID="cri-o://f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" gracePeriod=30 Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.618417 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" gracePeriod=30 Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.618570 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="sbdb" containerID="cri-o://fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" gracePeriod=30 Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.618692 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="northd" containerID="cri-o://522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" gracePeriod=30 Nov 24 19:28:29 crc kubenswrapper[4812]: I1124 19:28:29.688202 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" containerID="cri-o://dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" gracePeriod=30 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.008970 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/3.log" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.012168 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovn-acl-logging/0.log" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.012770 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovn-controller/0.log" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.013382 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088202 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-z57nr"] Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088503 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-acl-logging" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088521 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-acl-logging" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088532 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088539 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088546 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088553 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088562 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088568 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088581 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="nbdb" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088587 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="nbdb" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088596 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088602 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088611 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="northd" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088616 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="northd" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088626 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-node" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088635 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-node" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088652 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088659 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088667 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74998661-1cd3-4ea2-ae49-b1e1a17da3e4" containerName="registry" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088675 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="74998661-1cd3-4ea2-ae49-b1e1a17da3e4" containerName="registry" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088685 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088692 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088702 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kubecfg-setup" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088709 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kubecfg-setup" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.088718 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="sbdb" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088742 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="sbdb" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088844 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-acl-logging" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088856 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088862 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088869 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088876 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088885 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088893 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="74998661-1cd3-4ea2-ae49-b1e1a17da3e4" containerName="registry" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088901 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovn-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088908 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="northd" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088916 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="nbdb" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088923 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="sbdb" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.088930 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="kube-rbac-proxy-node" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.089036 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.089044 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.089150 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerName="ovnkube-controller" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.091086 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.175681 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-script-lib\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176170 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-systemd-units\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176212 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-bin\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176246 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-log-socket\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176269 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-netd\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176319 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-env-overrides\") pod 
\"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176352 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-log-socket" (OuterVolumeSpecName: "log-socket") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176323 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176392 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176359 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-node-log\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176422 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176429 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-node-log" (OuterVolumeSpecName: "node-log") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176423 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176483 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-var-lib-openvswitch\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176538 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovn-node-metrics-cert\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176591 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-kubelet\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176619 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-systemd\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176618 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176649 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-ovn-kubernetes\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176689 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). 
InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176719 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-slash\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176747 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-etc-openvswitch\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176790 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zx7\" (UniqueName: \"kubernetes.io/projected/b24bf762-6020-46b4-b9e8-589eb8ed0650-kube-api-access-x4zx7\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176812 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176836 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-netns\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176858 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-config\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176888 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-openvswitch\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176908 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-ovn\") pod \"b24bf762-6020-46b4-b9e8-589eb8ed0650\" (UID: \"b24bf762-6020-46b4-b9e8-589eb8ed0650\") " Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.176990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovnkube-script-lib\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177014 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-kubelet\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177033 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbvkn\" (UniqueName: \"kubernetes.io/projected/c8ae1181-23f5-4856-b47f-cc99c0455edf-kube-api-access-xbvkn\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177052 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-env-overrides\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177073 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-node-log\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177090 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-systemd\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177109 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-systemd-units\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177134 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-var-lib-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177151 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovnkube-config\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177225 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc 
kubenswrapper[4812]: I1124 19:28:30.177245 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-cni-bin\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177263 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-run-netns\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177310 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-slash\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-run-ovn-kubernetes\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177370 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovn-node-metrics-cert\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177393 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-ovn\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177419 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-log-socket\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177445 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-cni-netd\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177469 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-etc-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177511 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177523 4812 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177534 4812 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177543 4812 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177553 4812 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177562 4812 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177572 4812 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177583 4812 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177592 4812 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177602 4812 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177710 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177772 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-slash" (OuterVolumeSpecName: "host-slash") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177804 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177852 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177894 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.177944 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.178108 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.184135 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.185109 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24bf762-6020-46b4-b9e8-589eb8ed0650-kube-api-access-x4zx7" (OuterVolumeSpecName: "kube-api-access-x4zx7") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "kube-api-access-x4zx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.192541 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b24bf762-6020-46b4-b9e8-589eb8ed0650" (UID: "b24bf762-6020-46b4-b9e8-589eb8ed0650"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279444 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovnkube-script-lib\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279487 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-kubelet\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279514 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbvkn\" (UniqueName: \"kubernetes.io/projected/c8ae1181-23f5-4856-b47f-cc99c0455edf-kube-api-access-xbvkn\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279533 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-env-overrides\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279555 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-node-log\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279577 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-systemd\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279596 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-systemd-units\") pod \"ovnkube-node-z57nr\" (UID: 
\"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279624 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-var-lib-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279642 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovnkube-config\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279662 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279661 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-kubelet\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279716 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-cni-bin\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279684 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-cni-bin\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279737 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-systemd\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-var-lib-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.279835 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-systemd-units\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280023 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280089 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-node-log\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280138 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280203 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-run-netns\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280244 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-slash\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280261 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280276 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovn-node-metrics-cert\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280324 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-run-netns\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-run-ovn-kubernetes\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 
crc kubenswrapper[4812]: I1124 19:28:30.280378 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-slash\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280408 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-ovn\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280421 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-run-ovn-kubernetes\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280451 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-run-ovn\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280499 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-log-socket\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280570 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-cni-netd\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280624 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-etc-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280618 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-host-cni-netd\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280576 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-log-socket\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280682 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c8ae1181-23f5-4856-b47f-cc99c0455edf-etc-openvswitch\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280754 4812 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280777 4812 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280798 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zx7\" (UniqueName: \"kubernetes.io/projected/b24bf762-6020-46b4-b9e8-589eb8ed0650-kube-api-access-x4zx7\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-env-overrides\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280818 4812 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280891 4812 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280912 4812 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280930 4812 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280949 4812 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280970 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b24bf762-6020-46b4-b9e8-589eb8ed0650-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280990 4812 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b24bf762-6020-46b4-b9e8-589eb8ed0650-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.280945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovnkube-config\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.281587 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovnkube-script-lib\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.286422 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ae1181-23f5-4856-b47f-cc99c0455edf-ovn-node-metrics-cert\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.299049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbvkn\" (UniqueName: \"kubernetes.io/projected/c8ae1181-23f5-4856-b47f-cc99c0455edf-kube-api-access-xbvkn\") pod \"ovnkube-node-z57nr\" (UID: \"c8ae1181-23f5-4856-b47f-cc99c0455edf\") " pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.414980 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.666777 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/2.log" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.667395 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/1.log" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.667437 4812 generic.go:334] "Generic (PLEG): container finished" podID="c270cb89-97c2-48c4-94c3-9b8420d81cfd" containerID="3bb81f02ea22620bd0101c7d8fdcad7d10ed08c41d42d32cd204a571e256cf0f" exitCode=2 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.667531 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerDied","Data":"3bb81f02ea22620bd0101c7d8fdcad7d10ed08c41d42d32cd204a571e256cf0f"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.667580 4812 scope.go:117] "RemoveContainer" containerID="a3218c66661f13d16c61591f5b6e09f9aecf9fcad3093917cf7f65d64c5756a0" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.668296 4812 scope.go:117] "RemoveContainer" containerID="3bb81f02ea22620bd0101c7d8fdcad7d10ed08c41d42d32cd204a571e256cf0f" Nov 24 19:28:30 crc kubenswrapper[4812]: E1124 19:28:30.668684 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lhgj5_openshift-multus(c270cb89-97c2-48c4-94c3-9b8420d81cfd)\"" pod="openshift-multus/multus-lhgj5" podUID="c270cb89-97c2-48c4-94c3-9b8420d81cfd" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.674039 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovnkube-controller/3.log" Nov 24 
19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.677287 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovn-acl-logging/0.log" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678081 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-dgm54_b24bf762-6020-46b4-b9e8-589eb8ed0650/ovn-controller/0.log" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678548 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" exitCode=0 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678593 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" exitCode=0 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678607 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" exitCode=0 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678617 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" exitCode=0 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678628 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" exitCode=0 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678637 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" exitCode=0 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678646 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" exitCode=143 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678663 4812 generic.go:334] "Generic (PLEG): container finished" podID="b24bf762-6020-46b4-b9e8-589eb8ed0650" containerID="227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" exitCode=143 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678682 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678659 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.678920 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679051 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679093 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679128 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679153 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679167 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679180 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679193 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679206 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679218 4812 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679230 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679242 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679254 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679270 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679289 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679302 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679314 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679326 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679365 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679376 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679387 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679399 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679411 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679425 4812 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679441 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679458 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679471 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679482 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679492 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679503 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679514 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679525 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679580 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679592 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679605 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679621 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-dgm54" event={"ID":"b24bf762-6020-46b4-b9e8-589eb8ed0650","Type":"ContainerDied","Data":"4c9662b6a6eccb4a86d933bc4d0f93a8ff37764244d1ea70eb4bd823cebd2a82"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679639 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} 
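The "container finished" events above are the real signal in this teardown: six containers of the old ovnkube-node-dgm54 pod (UID b24bf762-6020-46b4-b9e8-589eb8ed0650) exited with code 0, two exited 143 (128+15, i.e. terminated by SIGTERM), and kube-multus exited 2, which is what sent it into CrashLoopBackOff a few lines earlier. A minimal Go sketch, assuming journalctl output with one journal entry per line in the klog format shown here (this tooling is illustrative, not part of the log), that tallies those exit codes per pod:

package main

// Tally PLEG "container finished" exit codes per pod from a kubelet journal
// piped on stdin, e.g.:  journalctl -u kubelet --no-pager | go run tally.go

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var finishedRe = regexp.MustCompile(
	`container finished" podID="([^"]+)" containerID="[0-9a-f]+" exitCode=(\d+)`)

func main() {
	counts := map[string]map[string]int{} // podID -> exit code -> count
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range finishedRe.FindAllStringSubmatch(sc.Text(), -1) {
			pod, code := m[1], m[2]
			if counts[pod] == nil {
				counts[pod] = map[string]int{}
			}
			counts[pod][code]++
		}
	}
	for pod, byCode := range counts {
		fmt.Printf("%s: %v\n", pod, byCode)
	}
}

On this excerpt it would report map[0:6 143:2] for the dgm54 pod, map[2:1] for the multus pod, and map[0:1] for the new z57nr pod, whose first container finishes just below. The removal retries resume here: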
Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679651 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679663 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679677 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679689 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679700 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679712 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679722 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679735 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.679747 4812 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.680758 4812 generic.go:334] "Generic (PLEG): container finished" podID="c8ae1181-23f5-4856-b47f-cc99c0455edf" containerID="4ba1a9ca603d27386c84fcc2bd01a9a93ea508ad5058a332af5eb2b34a319dc1" exitCode=0 Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.680808 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerDied","Data":"4ba1a9ca603d27386c84fcc2bd01a9a93ea508ad5058a332af5eb2b34a319dc1"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.680909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"93991a472245d5c3e361dfc8a8dbefb76ce5143bb28452b71ca5cabd064aae56"} Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.695941 4812 scope.go:117] "RemoveContainer" containerID="dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.723579 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.756272 
4812 scope.go:117] "RemoveContainer" containerID="fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.761852 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dgm54"] Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.764898 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-dgm54"] Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.799083 4812 scope.go:117] "RemoveContainer" containerID="9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.818030 4812 scope.go:117] "RemoveContainer" containerID="522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.835147 4812 scope.go:117] "RemoveContainer" containerID="2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.872249 4812 scope.go:117] "RemoveContainer" containerID="2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.941452 4812 scope.go:117] "RemoveContainer" containerID="f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.964498 4812 scope.go:117] "RemoveContainer" containerID="227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.972004 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24bf762-6020-46b4-b9e8-589eb8ed0650" path="/var/lib/kubelet/pods/b24bf762-6020-46b4-b9e8-589eb8ed0650/volumes" Nov 24 19:28:30 crc kubenswrapper[4812]: I1124 19:28:30.993315 4812 scope.go:117] "RemoveContainer" containerID="ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.025195 4812 scope.go:117] "RemoveContainer" containerID="dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.025741 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": container with ID starting with dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a not found: ID does not exist" containerID="dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.025804 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} err="failed to get container status \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": rpc error: code = NotFound desc = could not find container \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": container with ID starting with dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.025849 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.026117 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": container with ID starting with e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63 not found: ID does not exist" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.026141 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} err="failed to get container status \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": rpc error: code = NotFound desc = could not find container \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": container with ID starting with e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.026155 4812 scope.go:117] "RemoveContainer" containerID="fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.026419 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": container with ID starting with fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33 not found: ID does not exist" containerID="fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.026459 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} err="failed to get container status \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": rpc error: code = NotFound desc = could not find container \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": container with ID starting with fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.026484 4812 scope.go:117] "RemoveContainer" containerID="9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.026853 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": container with ID starting with 9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189 not found: ID does not exist" containerID="9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.026883 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} err="failed to get container status \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": rpc error: code = NotFound desc = could not find container \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": container with ID starting with 9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.026896 4812 scope.go:117] "RemoveContainer" containerID="522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" Nov 24 19:28:31 crc 
kubenswrapper[4812]: E1124 19:28:31.027223 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": container with ID starting with 522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee not found: ID does not exist" containerID="522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.027287 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} err="failed to get container status \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": rpc error: code = NotFound desc = could not find container \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": container with ID starting with 522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.027328 4812 scope.go:117] "RemoveContainer" containerID="2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.027633 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": container with ID starting with 2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c not found: ID does not exist" containerID="2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.027655 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} err="failed to get container status \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": rpc error: code = NotFound desc = could not find container \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": container with ID starting with 2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.027668 4812 scope.go:117] "RemoveContainer" containerID="2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.028043 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": container with ID starting with 2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085 not found: ID does not exist" containerID="2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.028124 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} err="failed to get container status \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": rpc error: code = NotFound desc = could not find container \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": container with ID starting with 2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: 
I1124 19:28:31.028152 4812 scope.go:117] "RemoveContainer" containerID="f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.030599 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": container with ID starting with f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0 not found: ID does not exist" containerID="f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.030648 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} err="failed to get container status \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": rpc error: code = NotFound desc = could not find container \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": container with ID starting with f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.030696 4812 scope.go:117] "RemoveContainer" containerID="227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.032004 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": container with ID starting with 227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0 not found: ID does not exist" containerID="227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.032049 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} err="failed to get container status \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": rpc error: code = NotFound desc = could not find container \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": container with ID starting with 227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.032069 4812 scope.go:117] "RemoveContainer" containerID="ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312" Nov 24 19:28:31 crc kubenswrapper[4812]: E1124 19:28:31.033854 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": container with ID starting with ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312 not found: ID does not exist" containerID="ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.033905 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} err="failed to get container status \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": rpc error: code = NotFound desc = could not find container \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": container 
with ID starting with ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.033935 4812 scope.go:117] "RemoveContainer" containerID="dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.034827 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} err="failed to get container status \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": rpc error: code = NotFound desc = could not find container \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": container with ID starting with dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.034865 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.035249 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} err="failed to get container status \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": rpc error: code = NotFound desc = could not find container \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": container with ID starting with e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.035292 4812 scope.go:117] "RemoveContainer" containerID="fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.039753 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} err="failed to get container status \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": rpc error: code = NotFound desc = could not find container \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": container with ID starting with fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.039794 4812 scope.go:117] "RemoveContainer" containerID="9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.040089 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} err="failed to get container status \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": rpc error: code = NotFound desc = could not find container \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": container with ID starting with 9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.040115 4812 scope.go:117] "RemoveContainer" containerID="522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.040406 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} err="failed to get container status \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": rpc error: code = NotFound desc = could not find container \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": container with ID starting with 522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.040448 4812 scope.go:117] "RemoveContainer" containerID="2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.040696 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} err="failed to get container status \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": rpc error: code = NotFound desc = could not find container \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": container with ID starting with 2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.040719 4812 scope.go:117] "RemoveContainer" containerID="2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.040973 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} err="failed to get container status \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": rpc error: code = NotFound desc = could not find container \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": container with ID starting with 2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.041015 4812 scope.go:117] "RemoveContainer" containerID="f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.041290 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} err="failed to get container status \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": rpc error: code = NotFound desc = could not find container \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": container with ID starting with f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.041326 4812 scope.go:117] "RemoveContainer" containerID="227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.041570 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} err="failed to get container status \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": rpc error: code = NotFound desc = could not find container \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": container with ID starting with 227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0 not found: ID does not exist" Nov 
24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.041595 4812 scope.go:117] "RemoveContainer" containerID="ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.041827 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} err="failed to get container status \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": rpc error: code = NotFound desc = could not find container \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": container with ID starting with ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.041866 4812 scope.go:117] "RemoveContainer" containerID="dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042080 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} err="failed to get container status \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": rpc error: code = NotFound desc = could not find container \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": container with ID starting with dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042103 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042359 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} err="failed to get container status \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": rpc error: code = NotFound desc = could not find container \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": container with ID starting with e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042378 4812 scope.go:117] "RemoveContainer" containerID="fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042611 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} err="failed to get container status \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": rpc error: code = NotFound desc = could not find container \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": container with ID starting with fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042634 4812 scope.go:117] "RemoveContainer" containerID="9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042855 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} err="failed to get container status 
\"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": rpc error: code = NotFound desc = could not find container \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": container with ID starting with 9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.042895 4812 scope.go:117] "RemoveContainer" containerID="522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.043122 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} err="failed to get container status \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": rpc error: code = NotFound desc = could not find container \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": container with ID starting with 522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.043141 4812 scope.go:117] "RemoveContainer" containerID="2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.043444 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} err="failed to get container status \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": rpc error: code = NotFound desc = could not find container \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": container with ID starting with 2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.043537 4812 scope.go:117] "RemoveContainer" containerID="2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.043774 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} err="failed to get container status \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": rpc error: code = NotFound desc = could not find container \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": container with ID starting with 2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.043792 4812 scope.go:117] "RemoveContainer" containerID="f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044018 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} err="failed to get container status \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": rpc error: code = NotFound desc = could not find container \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": container with ID starting with f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044058 4812 scope.go:117] "RemoveContainer" 
containerID="227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044320 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} err="failed to get container status \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": rpc error: code = NotFound desc = could not find container \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": container with ID starting with 227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044349 4812 scope.go:117] "RemoveContainer" containerID="ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044575 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} err="failed to get container status \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": rpc error: code = NotFound desc = could not find container \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": container with ID starting with ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044620 4812 scope.go:117] "RemoveContainer" containerID="dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044836 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a"} err="failed to get container status \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": rpc error: code = NotFound desc = could not find container \"dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a\": container with ID starting with dd4fdc528657c9b0dd6691b205e7aeb8fe4c9864fc59951489f5ee6282a3366a not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.044853 4812 scope.go:117] "RemoveContainer" containerID="e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045016 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63"} err="failed to get container status \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": rpc error: code = NotFound desc = could not find container \"e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63\": container with ID starting with e1dce7638b271146175a55c17520366ff0f3e0bc33f3ccbc853d94a11648ee63 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045033 4812 scope.go:117] "RemoveContainer" containerID="fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045258 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33"} err="failed to get container status \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": rpc error: code = NotFound desc = could not find 
container \"fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33\": container with ID starting with fba882f4e6ab9d37f8ad168528841d8fe5634e96286ed97c51c50d2884498e33 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045296 4812 scope.go:117] "RemoveContainer" containerID="9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045569 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189"} err="failed to get container status \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": rpc error: code = NotFound desc = could not find container \"9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189\": container with ID starting with 9ee76f35d9a9e68e0b75bf6a6aad28e82069b80099a9ff052659c9c4fe149189 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045606 4812 scope.go:117] "RemoveContainer" containerID="522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045860 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee"} err="failed to get container status \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": rpc error: code = NotFound desc = could not find container \"522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee\": container with ID starting with 522ecbc478739ddf6820c5a71da271a4cdb7e19ccca74cf9b50f3628d2f35eee not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.045894 4812 scope.go:117] "RemoveContainer" containerID="2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.046166 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c"} err="failed to get container status \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": rpc error: code = NotFound desc = could not find container \"2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c\": container with ID starting with 2810b3ea3b5e04290c221c14f5f225051ddff288b1b3b5f2d8353720ecb9ad3c not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.046202 4812 scope.go:117] "RemoveContainer" containerID="2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.046450 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085"} err="failed to get container status \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": rpc error: code = NotFound desc = could not find container \"2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085\": container with ID starting with 2360d738be30a232337fb2561f6d32947f9c9273b04e3bd23075563f43726085 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.046485 4812 scope.go:117] "RemoveContainer" containerID="f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.046742 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0"} err="failed to get container status \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": rpc error: code = NotFound desc = could not find container \"f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0\": container with ID starting with f5950ad3aefc5a1b998075791562ba23e140d6ade5638e0a457cbaedb4fd84a0 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.046779 4812 scope.go:117] "RemoveContainer" containerID="227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.047023 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0"} err="failed to get container status \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": rpc error: code = NotFound desc = could not find container \"227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0\": container with ID starting with 227f320508e5bacb9746f4588313048b88c78fbbada21d2d25f56fabb3c322b0 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.047065 4812 scope.go:117] "RemoveContainer" containerID="ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.047302 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312"} err="failed to get container status \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": rpc error: code = NotFound desc = could not find container \"ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312\": container with ID starting with ee7a182ce7d2a30b7f9a47256d2afb75f084b13a75172498778e563501591312 not found: ID does not exist" Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.689604 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"32e58dee184e4521fe4b520bd7bd2306a7602efb8e5f4e659d82964acd39bbe8"} Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.689997 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"d0e8b9fd97a4123d22be1e0e5adddcc87943076d1ed8d884be1488fa6090783b"} Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.690025 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"cdcbee69d44d787ad2081c640a153c80d89d945f0f72661caf0f9702d26adc01"} Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.690040 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"bc261856368fd927666ab008d2d2ebbd2bad929138c10f339db91fd23837eac0"} Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.690053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" 
event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"1d13f038aa0dec8d2b17a54520c61c3e38e993dd05714ea26de1bdfba144645c"} Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.690064 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"6a35a63b0cf43e43e312b0e03819de26fe43c50e28c0616a0f09c4eb4308bec8"} Nov 24 19:28:31 crc kubenswrapper[4812]: I1124 19:28:31.692312 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/2.log" Nov 24 19:28:34 crc kubenswrapper[4812]: I1124 19:28:34.722792 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"5d9584246ffd646349b7bbb997b269e078a58508f3b3cd1e6265fea5428a71b3"} Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.412241 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-lr8tv"] Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.414641 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.417830 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.418047 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.420900 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.421167 4812 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-sg6gf" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.554731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/740086f8-f1a4-484e-946c-03e70a2a55fa-node-mnt\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.554821 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/740086f8-f1a4-484e-946c-03e70a2a55fa-crc-storage\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.554865 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhqf\" (UniqueName: \"kubernetes.io/projected/740086f8-f1a4-484e-946c-03e70a2a55fa-kube-api-access-drhqf\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.656657 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/740086f8-f1a4-484e-946c-03e70a2a55fa-node-mnt\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " 
pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.656763 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/740086f8-f1a4-484e-946c-03e70a2a55fa-crc-storage\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.656831 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhqf\" (UniqueName: \"kubernetes.io/projected/740086f8-f1a4-484e-946c-03e70a2a55fa-kube-api-access-drhqf\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.657097 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/740086f8-f1a4-484e-946c-03e70a2a55fa-node-mnt\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.658801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/740086f8-f1a4-484e-946c-03e70a2a55fa-crc-storage\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.691636 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhqf\" (UniqueName: \"kubernetes.io/projected/740086f8-f1a4-484e-946c-03e70a2a55fa-kube-api-access-drhqf\") pod \"crc-storage-crc-lr8tv\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: I1124 19:28:35.734151 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: E1124 19:28:35.777686 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(cbf9bc47cdbe416ac7f84e6429d237d9cee1fe80cb25c5221ebeb4e794a1011c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 19:28:35 crc kubenswrapper[4812]: E1124 19:28:35.778241 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(cbf9bc47cdbe416ac7f84e6429d237d9cee1fe80cb25c5221ebeb4e794a1011c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: E1124 19:28:35.778295 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(cbf9bc47cdbe416ac7f84e6429d237d9cee1fe80cb25c5221ebeb4e794a1011c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:35 crc kubenswrapper[4812]: E1124 19:28:35.778421 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-lr8tv_crc-storage(740086f8-f1a4-484e-946c-03e70a2a55fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-lr8tv_crc-storage(740086f8-f1a4-484e-946c-03e70a2a55fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(cbf9bc47cdbe416ac7f84e6429d237d9cee1fe80cb25c5221ebeb4e794a1011c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-lr8tv" podUID="740086f8-f1a4-484e-946c-03e70a2a55fa" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.744891 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" event={"ID":"c8ae1181-23f5-4856-b47f-cc99c0455edf","Type":"ContainerStarted","Data":"68dd76f9965cf5ec980219242c7ea307c8c0638653a58f86d615a5b2cf47547e"} Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.745478 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.745609 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.745712 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.772261 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.773988 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.784231 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" podStartSLOduration=6.784214059 podStartE2EDuration="6.784214059s" podCreationTimestamp="2025-11-24 19:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:28:36.782878022 +0000 UTC m=+710.571830413" watchObservedRunningTime="2025-11-24 19:28:36.784214059 +0000 UTC m=+710.573166430" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.988449 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lr8tv"] Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.988654 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:36 crc kubenswrapper[4812]: I1124 19:28:36.989428 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:37 crc kubenswrapper[4812]: E1124 19:28:37.015240 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(a2b0bbf1eb509a7094589b791c9b9790d0ade6b6dcf402eea232f3972ff10ea4): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Nov 24 19:28:37 crc kubenswrapper[4812]: E1124 19:28:37.015312 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(a2b0bbf1eb509a7094589b791c9b9790d0ade6b6dcf402eea232f3972ff10ea4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:37 crc kubenswrapper[4812]: E1124 19:28:37.015360 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(a2b0bbf1eb509a7094589b791c9b9790d0ade6b6dcf402eea232f3972ff10ea4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:37 crc kubenswrapper[4812]: E1124 19:28:37.015427 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-lr8tv_crc-storage(740086f8-f1a4-484e-946c-03e70a2a55fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-lr8tv_crc-storage(740086f8-f1a4-484e-946c-03e70a2a55fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(a2b0bbf1eb509a7094589b791c9b9790d0ade6b6dcf402eea232f3972ff10ea4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-lr8tv" podUID="740086f8-f1a4-484e-946c-03e70a2a55fa" Nov 24 19:28:41 crc kubenswrapper[4812]: I1124 19:28:41.968138 4812 scope.go:117] "RemoveContainer" containerID="3bb81f02ea22620bd0101c7d8fdcad7d10ed08c41d42d32cd204a571e256cf0f" Nov 24 19:28:41 crc kubenswrapper[4812]: E1124 19:28:41.969202 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lhgj5_openshift-multus(c270cb89-97c2-48c4-94c3-9b8420d81cfd)\"" pod="openshift-multus/multus-lhgj5" podUID="c270cb89-97c2-48c4-94c3-9b8420d81cfd" Nov 24 19:28:50 crc kubenswrapper[4812]: I1124 19:28:50.965272 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:50 crc kubenswrapper[4812]: I1124 19:28:50.966732 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:51 crc kubenswrapper[4812]: E1124 19:28:51.008037 4812 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(8c544b98d827c09ee5a60b08ddc276e57029993ff4a6a2f1b6441fdad57a5316): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 24 19:28:51 crc kubenswrapper[4812]: E1124 19:28:51.008519 4812 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(8c544b98d827c09ee5a60b08ddc276e57029993ff4a6a2f1b6441fdad57a5316): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:51 crc kubenswrapper[4812]: E1124 19:28:51.008556 4812 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(8c544b98d827c09ee5a60b08ddc276e57029993ff4a6a2f1b6441fdad57a5316): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:28:51 crc kubenswrapper[4812]: E1124 19:28:51.008636 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-lr8tv_crc-storage(740086f8-f1a4-484e-946c-03e70a2a55fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-lr8tv_crc-storage(740086f8-f1a4-484e-946c-03e70a2a55fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-lr8tv_crc-storage_740086f8-f1a4-484e-946c-03e70a2a55fa_0(8c544b98d827c09ee5a60b08ddc276e57029993ff4a6a2f1b6441fdad57a5316): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-lr8tv" podUID="740086f8-f1a4-484e-946c-03e70a2a55fa" Nov 24 19:28:56 crc kubenswrapper[4812]: I1124 19:28:56.971105 4812 scope.go:117] "RemoveContainer" containerID="3bb81f02ea22620bd0101c7d8fdcad7d10ed08c41d42d32cd204a571e256cf0f" Nov 24 19:28:57 crc kubenswrapper[4812]: I1124 19:28:57.889938 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lhgj5_c270cb89-97c2-48c4-94c3-9b8420d81cfd/kube-multus/2.log" Nov 24 19:28:57 crc kubenswrapper[4812]: I1124 19:28:57.890520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lhgj5" event={"ID":"c270cb89-97c2-48c4-94c3-9b8420d81cfd","Type":"ContainerStarted","Data":"5dfd47f937659ef1ecab9f73b50ad9f1b3f18b81f6218bfe9017080db0a92f27"} Nov 24 19:29:00 crc kubenswrapper[4812]: I1124 19:29:00.447105 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-z57nr" Nov 24 19:29:02 crc kubenswrapper[4812]: I1124 19:29:02.965618 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:29:02 crc kubenswrapper[4812]: I1124 19:29:02.966588 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:29:03 crc kubenswrapper[4812]: I1124 19:29:03.242969 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-lr8tv"] Nov 24 19:29:03 crc kubenswrapper[4812]: I1124 19:29:03.257095 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 19:29:03 crc kubenswrapper[4812]: I1124 19:29:03.937437 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lr8tv" event={"ID":"740086f8-f1a4-484e-946c-03e70a2a55fa","Type":"ContainerStarted","Data":"ee92281079f621f41060066b05be6901c71a003660edb3762bd928aff5f34d68"} Nov 24 19:29:05 crc kubenswrapper[4812]: I1124 19:29:05.951316 4812 generic.go:334] "Generic (PLEG): container finished" podID="740086f8-f1a4-484e-946c-03e70a2a55fa" containerID="0e0ff0ce14b9486e1f912226d61dd91d1d734c6454cb4c48de9d2540698b03ca" exitCode=0 Nov 24 19:29:05 crc kubenswrapper[4812]: I1124 19:29:05.951523 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lr8tv" event={"ID":"740086f8-f1a4-484e-946c-03e70a2a55fa","Type":"ContainerDied","Data":"0e0ff0ce14b9486e1f912226d61dd91d1d734c6454cb4c48de9d2540698b03ca"} Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.261140 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.372177 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/740086f8-f1a4-484e-946c-03e70a2a55fa-crc-storage\") pod \"740086f8-f1a4-484e-946c-03e70a2a55fa\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.372384 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drhqf\" (UniqueName: \"kubernetes.io/projected/740086f8-f1a4-484e-946c-03e70a2a55fa-kube-api-access-drhqf\") pod \"740086f8-f1a4-484e-946c-03e70a2a55fa\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.372463 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/740086f8-f1a4-484e-946c-03e70a2a55fa-node-mnt\") pod \"740086f8-f1a4-484e-946c-03e70a2a55fa\" (UID: \"740086f8-f1a4-484e-946c-03e70a2a55fa\") " Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.372638 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/740086f8-f1a4-484e-946c-03e70a2a55fa-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "740086f8-f1a4-484e-946c-03e70a2a55fa" (UID: "740086f8-f1a4-484e-946c-03e70a2a55fa"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.372839 4812 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/740086f8-f1a4-484e-946c-03e70a2a55fa-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.378068 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/740086f8-f1a4-484e-946c-03e70a2a55fa-kube-api-access-drhqf" (OuterVolumeSpecName: "kube-api-access-drhqf") pod "740086f8-f1a4-484e-946c-03e70a2a55fa" (UID: "740086f8-f1a4-484e-946c-03e70a2a55fa"). 
InnerVolumeSpecName "kube-api-access-drhqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.398239 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/740086f8-f1a4-484e-946c-03e70a2a55fa-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "740086f8-f1a4-484e-946c-03e70a2a55fa" (UID: "740086f8-f1a4-484e-946c-03e70a2a55fa"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.473851 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drhqf\" (UniqueName: \"kubernetes.io/projected/740086f8-f1a4-484e-946c-03e70a2a55fa-kube-api-access-drhqf\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.473904 4812 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/740086f8-f1a4-484e-946c-03e70a2a55fa-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.970680 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-lr8tv" event={"ID":"740086f8-f1a4-484e-946c-03e70a2a55fa","Type":"ContainerDied","Data":"ee92281079f621f41060066b05be6901c71a003660edb3762bd928aff5f34d68"} Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.970739 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee92281079f621f41060066b05be6901c71a003660edb3762bd928aff5f34d68" Nov 24 19:29:07 crc kubenswrapper[4812]: I1124 19:29:07.970822 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-lr8tv" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.075361 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9"] Nov 24 19:29:14 crc kubenswrapper[4812]: E1124 19:29:14.075982 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="740086f8-f1a4-484e-946c-03e70a2a55fa" containerName="storage" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.076004 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="740086f8-f1a4-484e-946c-03e70a2a55fa" containerName="storage" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.076189 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="740086f8-f1a4-484e-946c-03e70a2a55fa" containerName="storage" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.077477 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.079881 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.101276 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9"] Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.266396 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.266469 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.266554 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pltm\" (UniqueName: \"kubernetes.io/projected/ce87880b-2429-4769-8f74-00539adf6377-kube-api-access-5pltm\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.367866 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.367958 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.368063 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pltm\" (UniqueName: \"kubernetes.io/projected/ce87880b-2429-4769-8f74-00539adf6377-kube-api-access-5pltm\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.368520 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.368762 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.406224 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pltm\" (UniqueName: \"kubernetes.io/projected/ce87880b-2429-4769-8f74-00539adf6377-kube-api-access-5pltm\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.408175 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:14 crc kubenswrapper[4812]: I1124 19:29:14.864559 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9"] Nov 24 19:29:15 crc kubenswrapper[4812]: I1124 19:29:15.029774 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" event={"ID":"ce87880b-2429-4769-8f74-00539adf6377","Type":"ContainerStarted","Data":"f8f0bb85aa8fd0320d6595a6723c2bcb4a4fad8d09bde3c9e6c7043ac3fca0b7"} Nov 24 19:29:16 crc kubenswrapper[4812]: I1124 19:29:16.036014 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce87880b-2429-4769-8f74-00539adf6377" containerID="93f254b8476b19eb0522f9c9532c6841b21f7ed191037489ab7854dbfd82681a" exitCode=0 Nov 24 19:29:16 crc kubenswrapper[4812]: I1124 19:29:16.036065 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" event={"ID":"ce87880b-2429-4769-8f74-00539adf6377","Type":"ContainerDied","Data":"93f254b8476b19eb0522f9c9532c6841b21f7ed191037489ab7854dbfd82681a"} Nov 24 19:29:18 crc kubenswrapper[4812]: I1124 19:29:18.053141 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce87880b-2429-4769-8f74-00539adf6377" containerID="06336b6d066fed633e6c4bf48e119933ba3a573e19f02bd525c7d146826eeebb" exitCode=0 Nov 24 19:29:18 crc kubenswrapper[4812]: I1124 19:29:18.053190 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" event={"ID":"ce87880b-2429-4769-8f74-00539adf6377","Type":"ContainerDied","Data":"06336b6d066fed633e6c4bf48e119933ba3a573e19f02bd525c7d146826eeebb"} Nov 24 19:29:19 crc kubenswrapper[4812]: I1124 19:29:19.067179 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce87880b-2429-4769-8f74-00539adf6377" containerID="ca1a961c0548af0466e60cbca2e09e6c171ec094f1b55a86ca8b011772f6bdba" exitCode=0 Nov 24 19:29:19 crc kubenswrapper[4812]: I1124 
19:29:19.067229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" event={"ID":"ce87880b-2429-4769-8f74-00539adf6377","Type":"ContainerDied","Data":"ca1a961c0548af0466e60cbca2e09e6c171ec094f1b55a86ca8b011772f6bdba"} Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.379634 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.458442 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-util\") pod \"ce87880b-2429-4769-8f74-00539adf6377\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.458541 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-bundle\") pod \"ce87880b-2429-4769-8f74-00539adf6377\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.458622 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pltm\" (UniqueName: \"kubernetes.io/projected/ce87880b-2429-4769-8f74-00539adf6377-kube-api-access-5pltm\") pod \"ce87880b-2429-4769-8f74-00539adf6377\" (UID: \"ce87880b-2429-4769-8f74-00539adf6377\") " Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.465379 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-bundle" (OuterVolumeSpecName: "bundle") pod "ce87880b-2429-4769-8f74-00539adf6377" (UID: "ce87880b-2429-4769-8f74-00539adf6377"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.470578 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce87880b-2429-4769-8f74-00539adf6377-kube-api-access-5pltm" (OuterVolumeSpecName: "kube-api-access-5pltm") pod "ce87880b-2429-4769-8f74-00539adf6377" (UID: "ce87880b-2429-4769-8f74-00539adf6377"). InnerVolumeSpecName "kube-api-access-5pltm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.489224 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-util" (OuterVolumeSpecName: "util") pod "ce87880b-2429-4769-8f74-00539adf6377" (UID: "ce87880b-2429-4769-8f74-00539adf6377"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.560119 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pltm\" (UniqueName: \"kubernetes.io/projected/ce87880b-2429-4769-8f74-00539adf6377-kube-api-access-5pltm\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.560391 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-util\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:20 crc kubenswrapper[4812]: I1124 19:29:20.560539 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce87880b-2429-4769-8f74-00539adf6377-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:21 crc kubenswrapper[4812]: I1124 19:29:21.080893 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" event={"ID":"ce87880b-2429-4769-8f74-00539adf6377","Type":"ContainerDied","Data":"f8f0bb85aa8fd0320d6595a6723c2bcb4a4fad8d09bde3c9e6c7043ac3fca0b7"} Nov 24 19:29:21 crc kubenswrapper[4812]: I1124 19:29:21.080947 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8f0bb85aa8fd0320d6595a6723c2bcb4a4fad8d09bde3c9e6c7043ac3fca0b7" Nov 24 19:29:21 crc kubenswrapper[4812]: I1124 19:29:21.081068 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.132462 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ttqsd"] Nov 24 19:29:23 crc kubenswrapper[4812]: E1124 19:29:23.133097 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce87880b-2429-4769-8f74-00539adf6377" containerName="pull" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.133122 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce87880b-2429-4769-8f74-00539adf6377" containerName="pull" Nov 24 19:29:23 crc kubenswrapper[4812]: E1124 19:29:23.133143 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce87880b-2429-4769-8f74-00539adf6377" containerName="extract" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.133155 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce87880b-2429-4769-8f74-00539adf6377" containerName="extract" Nov 24 19:29:23 crc kubenswrapper[4812]: E1124 19:29:23.133181 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce87880b-2429-4769-8f74-00539adf6377" containerName="util" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.133192 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce87880b-2429-4769-8f74-00539adf6377" containerName="util" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.133365 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce87880b-2429-4769-8f74-00539adf6377" containerName="extract" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.133819 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.136124 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.136172 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-4xkwn" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.143228 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.148363 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ttqsd"] Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.197780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58frb\" (UniqueName: \"kubernetes.io/projected/e0231310-bf09-4709-a225-ffc202a7ca6a-kube-api-access-58frb\") pod \"nmstate-operator-557fdffb88-ttqsd\" (UID: \"e0231310-bf09-4709-a225-ffc202a7ca6a\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.299109 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58frb\" (UniqueName: \"kubernetes.io/projected/e0231310-bf09-4709-a225-ffc202a7ca6a-kube-api-access-58frb\") pod \"nmstate-operator-557fdffb88-ttqsd\" (UID: \"e0231310-bf09-4709-a225-ffc202a7ca6a\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.336631 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58frb\" (UniqueName: \"kubernetes.io/projected/e0231310-bf09-4709-a225-ffc202a7ca6a-kube-api-access-58frb\") pod \"nmstate-operator-557fdffb88-ttqsd\" (UID: \"e0231310-bf09-4709-a225-ffc202a7ca6a\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.450442 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" Nov 24 19:29:23 crc kubenswrapper[4812]: I1124 19:29:23.664374 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ttqsd"] Nov 24 19:29:24 crc kubenswrapper[4812]: I1124 19:29:24.100420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" event={"ID":"e0231310-bf09-4709-a225-ffc202a7ca6a","Type":"ContainerStarted","Data":"cfe66ada138a7be23a65c6026fb6622ad43dcb4d0b6fac8f182099398b516016"} Nov 24 19:29:24 crc kubenswrapper[4812]: I1124 19:29:24.724515 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6zn22"] Nov 24 19:29:24 crc kubenswrapper[4812]: I1124 19:29:24.725242 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" podUID="1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" containerName="controller-manager" containerID="cri-o://04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae" gracePeriod=30 Nov 24 19:29:24 crc kubenswrapper[4812]: I1124 19:29:24.823303 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr"] Nov 24 19:29:24 crc kubenswrapper[4812]: I1124 19:29:24.823650 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" podUID="a9392d98-21d0-4761-a4e4-0a75367b4c31" containerName="route-controller-manager" containerID="cri-o://e7c49c08385020cf2e9de1b723115a409486bb32b3c57141a8b471c3ec7b4433" gracePeriod=30 Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.110714 4812 generic.go:334] "Generic (PLEG): container finished" podID="a9392d98-21d0-4761-a4e4-0a75367b4c31" containerID="e7c49c08385020cf2e9de1b723115a409486bb32b3c57141a8b471c3ec7b4433" exitCode=0 Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.110778 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" event={"ID":"a9392d98-21d0-4761-a4e4-0a75367b4c31","Type":"ContainerDied","Data":"e7c49c08385020cf2e9de1b723115a409486bb32b3c57141a8b471c3ec7b4433"} Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.110942 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.112258 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" containerID="04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae" exitCode=0 Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.112285 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" event={"ID":"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a","Type":"ContainerDied","Data":"04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae"} Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.112302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" event={"ID":"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a","Type":"ContainerDied","Data":"269973b9eef9d55dfd028e938e0a26993f6fe1ffacf91d3b0d3719bc16c9cf7f"} Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.112320 4812 scope.go:117] "RemoveContainer" containerID="04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.136232 4812 scope.go:117] "RemoveContainer" containerID="04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae" Nov 24 19:29:25 crc kubenswrapper[4812]: E1124 19:29:25.136639 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae\": container with ID starting with 04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae not found: ID does not exist" containerID="04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.136662 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae"} err="failed to get container status \"04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae\": rpc error: code = NotFound desc = could not find container \"04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae\": container with ID starting with 04f8139176875de509920e10f4b7b69dae3b8ced2d32e28b51e1aac66bdb10ae not found: ID does not exist" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.199372 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.224780 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-serving-cert\") pod \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.224826 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgq9m\" (UniqueName: \"kubernetes.io/projected/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-kube-api-access-mgq9m\") pod \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.224887 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-client-ca\") pod \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.224938 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-proxy-ca-bundles\") pod \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.224958 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-config\") pod \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\" (UID: \"1ba44129-7f71-4b58-a7eb-ee5356d4aa4a\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.225787 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" (UID: "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.225936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-config" (OuterVolumeSpecName: "config") pod "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" (UID: "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.226083 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-client-ca" (OuterVolumeSpecName: "client-ca") pod "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" (UID: "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.233827 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" (UID: "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.241257 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-kube-api-access-mgq9m" (OuterVolumeSpecName: "kube-api-access-mgq9m") pod "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" (UID: "1ba44129-7f71-4b58-a7eb-ee5356d4aa4a"). InnerVolumeSpecName "kube-api-access-mgq9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.282604 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-648876bcc7-j6drk"] Nov 24 19:29:25 crc kubenswrapper[4812]: E1124 19:29:25.283016 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9392d98-21d0-4761-a4e4-0a75367b4c31" containerName="route-controller-manager" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.283027 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9392d98-21d0-4761-a4e4-0a75367b4c31" containerName="route-controller-manager" Nov 24 19:29:25 crc kubenswrapper[4812]: E1124 19:29:25.283040 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" containerName="controller-manager" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.283046 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" containerName="controller-manager" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.283156 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" containerName="controller-manager" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.283166 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9392d98-21d0-4761-a4e4-0a75367b4c31" containerName="route-controller-manager" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.283521 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.294910 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648876bcc7-j6drk"] Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326348 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jssqm\" (UniqueName: \"kubernetes.io/projected/a9392d98-21d0-4761-a4e4-0a75367b4c31-kube-api-access-jssqm\") pod \"a9392d98-21d0-4761-a4e4-0a75367b4c31\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-client-ca\") pod \"a9392d98-21d0-4761-a4e4-0a75367b4c31\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326437 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-config\") pod \"a9392d98-21d0-4761-a4e4-0a75367b4c31\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326462 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9392d98-21d0-4761-a4e4-0a75367b4c31-serving-cert\") pod \"a9392d98-21d0-4761-a4e4-0a75367b4c31\" (UID: \"a9392d98-21d0-4761-a4e4-0a75367b4c31\") " Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326596 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-config\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326631 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5cqf\" (UniqueName: \"kubernetes.io/projected/10541b32-8e45-4ce7-a51d-7bcd41befd9a-kube-api-access-d5cqf\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326649 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-client-ca\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326667 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10541b32-8e45-4ce7-a51d-7bcd41befd9a-serving-cert\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326691 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-proxy-ca-bundles\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326720 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326731 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgq9m\" (UniqueName: \"kubernetes.io/projected/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-kube-api-access-mgq9m\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326742 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326751 4812 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.326763 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.327931 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9392d98-21d0-4761-a4e4-0a75367b4c31" (UID: "a9392d98-21d0-4761-a4e4-0a75367b4c31"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.328264 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-config" (OuterVolumeSpecName: "config") pod "a9392d98-21d0-4761-a4e4-0a75367b4c31" (UID: "a9392d98-21d0-4761-a4e4-0a75367b4c31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.330075 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9392d98-21d0-4761-a4e4-0a75367b4c31-kube-api-access-jssqm" (OuterVolumeSpecName: "kube-api-access-jssqm") pod "a9392d98-21d0-4761-a4e4-0a75367b4c31" (UID: "a9392d98-21d0-4761-a4e4-0a75367b4c31"). InnerVolumeSpecName "kube-api-access-jssqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.332911 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9392d98-21d0-4761-a4e4-0a75367b4c31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9392d98-21d0-4761-a4e4-0a75367b4c31" (UID: "a9392d98-21d0-4761-a4e4-0a75367b4c31"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.337672 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt"] Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.340370 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.344109 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt"] Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427656 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-proxy-ca-bundles\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427779 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-config\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5cqf\" (UniqueName: \"kubernetes.io/projected/10541b32-8e45-4ce7-a51d-7bcd41befd9a-kube-api-access-d5cqf\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-client-ca\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427854 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10541b32-8e45-4ce7-a51d-7bcd41befd9a-serving-cert\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427895 4812 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9392d98-21d0-4761-a4e4-0a75367b4c31-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427908 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jssqm\" (UniqueName: \"kubernetes.io/projected/a9392d98-21d0-4761-a4e4-0a75367b4c31-kube-api-access-jssqm\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427920 4812 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-client-ca\") on node \"crc\" 
DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.427929 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9392d98-21d0-4761-a4e4-0a75367b4c31-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.428968 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-client-ca\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.429205 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-proxy-ca-bundles\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.430248 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10541b32-8e45-4ce7-a51d-7bcd41befd9a-config\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.431019 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10541b32-8e45-4ce7-a51d-7bcd41befd9a-serving-cert\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.448509 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5cqf\" (UniqueName: \"kubernetes.io/projected/10541b32-8e45-4ce7-a51d-7bcd41befd9a-kube-api-access-d5cqf\") pod \"controller-manager-648876bcc7-j6drk\" (UID: \"10541b32-8e45-4ce7-a51d-7bcd41befd9a\") " pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.528604 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg5fb\" (UniqueName: \"kubernetes.io/projected/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-kube-api-access-kg5fb\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.528690 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-client-ca\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.528774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-config\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: 
\"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.528879 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-serving-cert\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.608168 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.630445 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg5fb\" (UniqueName: \"kubernetes.io/projected/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-kube-api-access-kg5fb\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.630565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-client-ca\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.630639 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-config\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.630696 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-serving-cert\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.631830 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-client-ca\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.633027 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-config\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.644349 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-serving-cert\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.647301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg5fb\" (UniqueName: \"kubernetes.io/projected/ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc-kube-api-access-kg5fb\") pod \"route-controller-manager-fc68d945-fwgwt\" (UID: \"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc\") " pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:25 crc kubenswrapper[4812]: I1124 19:29:25.660480 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.133920 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" event={"ID":"a9392d98-21d0-4761-a4e4-0a75367b4c31","Type":"ContainerDied","Data":"d750689913de07cb46bfb0b07857dc4789753d64d8c4604215b97bce80af9f25"} Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.134188 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr" Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.134223 4812 scope.go:117] "RemoveContainer" containerID="e7c49c08385020cf2e9de1b723115a409486bb32b3c57141a8b471c3ec7b4433" Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.144445 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6zn22" Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.192981 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr"] Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.199180 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-574hr"] Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.207693 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6zn22"] Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.212360 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6zn22"] Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.274112 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt"] Nov 24 19:29:26 crc kubenswrapper[4812]: W1124 19:29:26.277449 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea838c7f_8d50_4ff9_af9f_1c9bc0b798bc.slice/crio-fa4b86247a8ac322a4c138a8a7bf608629340cee1cfcd2a5b93da3922ba40dc6 WatchSource:0}: Error finding container fa4b86247a8ac322a4c138a8a7bf608629340cee1cfcd2a5b93da3922ba40dc6: Status 404 returned error can't find the container with id fa4b86247a8ac322a4c138a8a7bf608629340cee1cfcd2a5b93da3922ba40dc6 Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.428911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-648876bcc7-j6drk"] Nov 24 19:29:26 crc kubenswrapper[4812]: W1124 19:29:26.429559 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10541b32_8e45_4ce7_a51d_7bcd41befd9a.slice/crio-2f66e4e9bd02405561692fdc80fddde7bde8b30c48b86cd416b2d4c09f314865 WatchSource:0}: Error finding container 2f66e4e9bd02405561692fdc80fddde7bde8b30c48b86cd416b2d4c09f314865: Status 404 returned error can't find the container with id 2f66e4e9bd02405561692fdc80fddde7bde8b30c48b86cd416b2d4c09f314865 Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.972746 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba44129-7f71-4b58-a7eb-ee5356d4aa4a" path="/var/lib/kubelet/pods/1ba44129-7f71-4b58-a7eb-ee5356d4aa4a/volumes" Nov 24 19:29:26 crc kubenswrapper[4812]: I1124 19:29:26.973657 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9392d98-21d0-4761-a4e4-0a75367b4c31" path="/var/lib/kubelet/pods/a9392d98-21d0-4761-a4e4-0a75367b4c31/volumes" Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.150149 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" event={"ID":"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc","Type":"ContainerStarted","Data":"4b385db8d655b25ad5e1b1be33d2fb2d0b3b953f62490f9443bac33c0318ed49"} Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.151127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" event={"ID":"ea838c7f-8d50-4ff9-af9f-1c9bc0b798bc","Type":"ContainerStarted","Data":"fa4b86247a8ac322a4c138a8a7bf608629340cee1cfcd2a5b93da3922ba40dc6"} Nov 24 
19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.151248 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.152030 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" event={"ID":"10541b32-8e45-4ce7-a51d-7bcd41befd9a","Type":"ContainerStarted","Data":"0f4f847686857c7ba8875207d493c28182d0bf9f63017844a26a226bee838804"} Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.152094 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" event={"ID":"10541b32-8e45-4ce7-a51d-7bcd41befd9a","Type":"ContainerStarted","Data":"2f66e4e9bd02405561692fdc80fddde7bde8b30c48b86cd416b2d4c09f314865"} Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.152372 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.154861 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" event={"ID":"e0231310-bf09-4709-a225-ffc202a7ca6a","Type":"ContainerStarted","Data":"d69528638f5e869b9379ae44cc0ff191d0e333cf243585d84db069826648a7eb"} Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.168115 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.169452 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" podStartSLOduration=2.169431486 podStartE2EDuration="2.169431486s" podCreationTimestamp="2025-11-24 19:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:29:27.166602646 +0000 UTC m=+760.955555017" watchObservedRunningTime="2025-11-24 19:29:27.169431486 +0000 UTC m=+760.958383857" Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.196016 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-648876bcc7-j6drk" podStartSLOduration=2.195993188 podStartE2EDuration="2.195993188s" podCreationTimestamp="2025-11-24 19:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:29:27.193127187 +0000 UTC m=+760.982079558" watchObservedRunningTime="2025-11-24 19:29:27.195993188 +0000 UTC m=+760.984945599" Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.237584 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqsd" podStartSLOduration=1.8493894819999999 podStartE2EDuration="4.237553743s" podCreationTimestamp="2025-11-24 19:29:23 +0000 UTC" firstStartedPulling="2025-11-24 19:29:23.679269653 +0000 UTC m=+757.468222034" lastFinishedPulling="2025-11-24 19:29:26.067433924 +0000 UTC m=+759.856386295" observedRunningTime="2025-11-24 19:29:27.216998482 +0000 UTC m=+761.005950853" watchObservedRunningTime="2025-11-24 19:29:27.237553743 +0000 UTC m=+761.026506134" Nov 24 19:29:27 crc kubenswrapper[4812]: I1124 19:29:27.429169 4812 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fc68d945-fwgwt" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.073832 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.075666 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.080830 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2mt84" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.099496 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.102234 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.103496 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.105652 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.128042 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fp6mk"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.128928 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.143688 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.228503 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.229206 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.231740 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2fvxv" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.232685 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.235597 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.238001 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.261544 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-ovs-socket\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.261633 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt8pn\" (UniqueName: \"kubernetes.io/projected/901a9498-a0c1-497c-b671-122385a07c36-kube-api-access-nt8pn\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.261672 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dd98c63c-2347-487d-95d0-b987f616398c-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5ts74\" (UID: \"dd98c63c-2347-487d-95d0-b987f616398c\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.261691 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5x88\" (UniqueName: \"kubernetes.io/projected/dd98c63c-2347-487d-95d0-b987f616398c-kube-api-access-c5x88\") pod \"nmstate-webhook-6b89b748d8-5ts74\" (UID: \"dd98c63c-2347-487d-95d0-b987f616398c\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.261712 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-nmstate-lock\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.261795 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-dbus-socket\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.261889 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkbm\" (UniqueName: \"kubernetes.io/projected/b313856b-afca-4673-ba78-cdcbf5d465cb-kube-api-access-2qkbm\") pod 
\"nmstate-metrics-5dcf9c57c5-dt784\" (UID: \"b313856b-afca-4673-ba78-cdcbf5d465cb\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.363616 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-ovs-socket\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.363692 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/84931e32-2b26-4a7b-8387-78c99e46841d-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.363734 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-ovs-socket\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.363753 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt8pn\" (UniqueName: \"kubernetes.io/projected/901a9498-a0c1-497c-b671-122385a07c36-kube-api-access-nt8pn\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.363885 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4rs\" (UniqueName: \"kubernetes.io/projected/84931e32-2b26-4a7b-8387-78c99e46841d-kube-api-access-dw4rs\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.363972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dd98c63c-2347-487d-95d0-b987f616398c-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5ts74\" (UID: \"dd98c63c-2347-487d-95d0-b987f616398c\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.364016 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5x88\" (UniqueName: \"kubernetes.io/projected/dd98c63c-2347-487d-95d0-b987f616398c-kube-api-access-c5x88\") pod \"nmstate-webhook-6b89b748d8-5ts74\" (UID: \"dd98c63c-2347-487d-95d0-b987f616398c\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.364067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-nmstate-lock\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.364144 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-dbus-socket\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.364206 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/84931e32-2b26-4a7b-8387-78c99e46841d-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.364257 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkbm\" (UniqueName: \"kubernetes.io/projected/b313856b-afca-4673-ba78-cdcbf5d465cb-kube-api-access-2qkbm\") pod \"nmstate-metrics-5dcf9c57c5-dt784\" (UID: \"b313856b-afca-4673-ba78-cdcbf5d465cb\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.364350 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-nmstate-lock\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.364454 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/901a9498-a0c1-497c-b671-122385a07c36-dbus-socket\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.375445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/dd98c63c-2347-487d-95d0-b987f616398c-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-5ts74\" (UID: \"dd98c63c-2347-487d-95d0-b987f616398c\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.382941 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5x88\" (UniqueName: \"kubernetes.io/projected/dd98c63c-2347-487d-95d0-b987f616398c-kube-api-access-c5x88\") pod \"nmstate-webhook-6b89b748d8-5ts74\" (UID: \"dd98c63c-2347-487d-95d0-b987f616398c\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.385200 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt8pn\" (UniqueName: \"kubernetes.io/projected/901a9498-a0c1-497c-b671-122385a07c36-kube-api-access-nt8pn\") pod \"nmstate-handler-fp6mk\" (UID: \"901a9498-a0c1-497c-b671-122385a07c36\") " pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.389318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkbm\" (UniqueName: \"kubernetes.io/projected/b313856b-afca-4673-ba78-cdcbf5d465cb-kube-api-access-2qkbm\") pod \"nmstate-metrics-5dcf9c57c5-dt784\" (UID: \"b313856b-afca-4673-ba78-cdcbf5d465cb\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.393839 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.416657 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.445611 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.447731 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-646969d5ff-6dtck"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.448494 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.469219 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/84931e32-2b26-4a7b-8387-78c99e46841d-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.469286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw4rs\" (UniqueName: \"kubernetes.io/projected/84931e32-2b26-4a7b-8387-78c99e46841d-kube-api-access-dw4rs\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.469393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/84931e32-2b26-4a7b-8387-78c99e46841d-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.470370 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/84931e32-2b26-4a7b-8387-78c99e46841d-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.483273 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/84931e32-2b26-4a7b-8387-78c99e46841d-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.485992 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-646969d5ff-6dtck"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.501933 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw4rs\" (UniqueName: \"kubernetes.io/projected/84931e32-2b26-4a7b-8387-78c99e46841d-kube-api-access-dw4rs\") pod \"nmstate-console-plugin-5874bd7bc5-h2g74\" (UID: \"84931e32-2b26-4a7b-8387-78c99e46841d\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 
19:29:28.545880 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.571468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-oauth-serving-cert\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.571513 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-config\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.571585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-trusted-ca-bundle\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.571635 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-service-ca\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.571670 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-oauth-config\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.571689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-serving-cert\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.571710 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbm9t\" (UniqueName: \"kubernetes.io/projected/9ce85532-4a9d-4611-b372-c727aa78fa9e-kube-api-access-gbm9t\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.659762 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784"] Nov 24 19:29:28 crc kubenswrapper[4812]: W1124 19:29:28.663547 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb313856b_afca_4673_ba78_cdcbf5d465cb.slice/crio-2b0a33dd4e2aed0a88afd87d6191891832665efa6a19b100d355bf2993696cfe WatchSource:0}: 
Error finding container 2b0a33dd4e2aed0a88afd87d6191891832665efa6a19b100d355bf2993696cfe: Status 404 returned error can't find the container with id 2b0a33dd4e2aed0a88afd87d6191891832665efa6a19b100d355bf2993696cfe Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.672528 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-service-ca\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.672593 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-oauth-config\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.672623 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-serving-cert\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.672653 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbm9t\" (UniqueName: \"kubernetes.io/projected/9ce85532-4a9d-4611-b372-c727aa78fa9e-kube-api-access-gbm9t\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.672684 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-oauth-serving-cert\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.672711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-config\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.672747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-trusted-ca-bundle\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.673938 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-service-ca\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.674490 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-oauth-serving-cert\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.674575 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-config\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.674849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ce85532-4a9d-4611-b372-c727aa78fa9e-trusted-ca-bundle\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.678168 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-oauth-config\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.679669 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9ce85532-4a9d-4611-b372-c727aa78fa9e-console-serving-cert\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.684430 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74"] Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.693884 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbm9t\" (UniqueName: \"kubernetes.io/projected/9ce85532-4a9d-4611-b372-c727aa78fa9e-kube-api-access-gbm9t\") pod \"console-646969d5ff-6dtck\" (UID: \"9ce85532-4a9d-4611-b372-c727aa78fa9e\") " pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.739248 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74"] Nov 24 19:29:28 crc kubenswrapper[4812]: W1124 19:29:28.755156 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84931e32_2b26_4a7b_8387_78c99e46841d.slice/crio-058984b7b7f3c6bccc2b70da470da809c71a1f5eabfb46dfe6ac8efb9866fa42 WatchSource:0}: Error finding container 058984b7b7f3c6bccc2b70da470da809c71a1f5eabfb46dfe6ac8efb9866fa42: Status 404 returned error can't find the container with id 058984b7b7f3c6bccc2b70da470da809c71a1f5eabfb46dfe6ac8efb9866fa42 Nov 24 19:29:28 crc kubenswrapper[4812]: I1124 19:29:28.800951 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:29 crc kubenswrapper[4812]: I1124 19:29:29.176707 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" event={"ID":"dd98c63c-2347-487d-95d0-b987f616398c","Type":"ContainerStarted","Data":"e3c8326f520ab60d79995b11c1decb1801cf4b2948bf001d3f218d80cc800fba"} Nov 24 19:29:29 crc kubenswrapper[4812]: I1124 19:29:29.181054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fp6mk" event={"ID":"901a9498-a0c1-497c-b671-122385a07c36","Type":"ContainerStarted","Data":"daac754f6954b4b4a7f32cefa541f9af41cc27babd53ca5b2a311c46b630fcf2"} Nov 24 19:29:29 crc kubenswrapper[4812]: I1124 19:29:29.183127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" event={"ID":"84931e32-2b26-4a7b-8387-78c99e46841d","Type":"ContainerStarted","Data":"058984b7b7f3c6bccc2b70da470da809c71a1f5eabfb46dfe6ac8efb9866fa42"} Nov 24 19:29:29 crc kubenswrapper[4812]: I1124 19:29:29.184763 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" event={"ID":"b313856b-afca-4673-ba78-cdcbf5d465cb","Type":"ContainerStarted","Data":"2b0a33dd4e2aed0a88afd87d6191891832665efa6a19b100d355bf2993696cfe"} Nov 24 19:29:29 crc kubenswrapper[4812]: I1124 19:29:29.192220 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-646969d5ff-6dtck"] Nov 24 19:29:30 crc kubenswrapper[4812]: I1124 19:29:30.191734 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-646969d5ff-6dtck" event={"ID":"9ce85532-4a9d-4611-b372-c727aa78fa9e","Type":"ContainerStarted","Data":"3cf8529f7f7949aee16060db5e196a2b03d7ca1f204cab7fe4e47f5d1451c914"} Nov 24 19:29:30 crc kubenswrapper[4812]: I1124 19:29:30.191981 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-646969d5ff-6dtck" event={"ID":"9ce85532-4a9d-4611-b372-c727aa78fa9e","Type":"ContainerStarted","Data":"ebbb2a9abff4d41039b3d8d5708fad9d240f09ad7b2839db5f0c8b52c103a3a5"} Nov 24 19:29:31 crc kubenswrapper[4812]: I1124 19:29:31.681533 4812 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.205008 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" event={"ID":"b313856b-afca-4673-ba78-cdcbf5d465cb","Type":"ContainerStarted","Data":"bccd4d489962eda6915afae9bcd6cdcb15d5e6c490b90e9173dbf204186abe84"} Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.207161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" event={"ID":"dd98c63c-2347-487d-95d0-b987f616398c","Type":"ContainerStarted","Data":"c6285616ebef968b2eda53a053b6bb44a9480c0af13087e6065b67f79538ae19"} Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.207322 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.209268 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fp6mk" event={"ID":"901a9498-a0c1-497c-b671-122385a07c36","Type":"ContainerStarted","Data":"269d84939ca48598a79bff670cda027842ba354dc6746d8710a2fae372dc2f30"} Nov 24 19:29:32 
crc kubenswrapper[4812]: I1124 19:29:32.209422 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.210797 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" event={"ID":"84931e32-2b26-4a7b-8387-78c99e46841d","Type":"ContainerStarted","Data":"bd8d9d079c8cef8c69ed99a000e5a09a7abe242342216c4a028357a0b38563a5"} Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.233474 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-646969d5ff-6dtck" podStartSLOduration=4.233448165 podStartE2EDuration="4.233448165s" podCreationTimestamp="2025-11-24 19:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:29:30.212306061 +0000 UTC m=+764.001258432" watchObservedRunningTime="2025-11-24 19:29:32.233448165 +0000 UTC m=+766.022400536" Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.233933 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" podStartSLOduration=1.209748283 podStartE2EDuration="4.233926059s" podCreationTimestamp="2025-11-24 19:29:28 +0000 UTC" firstStartedPulling="2025-11-24 19:29:28.697484717 +0000 UTC m=+762.486437078" lastFinishedPulling="2025-11-24 19:29:31.721662483 +0000 UTC m=+765.510614854" observedRunningTime="2025-11-24 19:29:32.229429241 +0000 UTC m=+766.018381622" watchObservedRunningTime="2025-11-24 19:29:32.233926059 +0000 UTC m=+766.022878440" Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.265286 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-h2g74" podStartSLOduration=1.306009595 podStartE2EDuration="4.265264245s" podCreationTimestamp="2025-11-24 19:29:28 +0000 UTC" firstStartedPulling="2025-11-24 19:29:28.76090353 +0000 UTC m=+762.549855901" lastFinishedPulling="2025-11-24 19:29:31.72015814 +0000 UTC m=+765.509110551" observedRunningTime="2025-11-24 19:29:32.256877408 +0000 UTC m=+766.045829779" watchObservedRunningTime="2025-11-24 19:29:32.265264245 +0000 UTC m=+766.054216616" Nov 24 19:29:32 crc kubenswrapper[4812]: I1124 19:29:32.288216 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fp6mk" podStartSLOduration=1.059131653 podStartE2EDuration="4.288169642s" podCreationTimestamp="2025-11-24 19:29:28 +0000 UTC" firstStartedPulling="2025-11-24 19:29:28.517173788 +0000 UTC m=+762.306126159" lastFinishedPulling="2025-11-24 19:29:31.746211747 +0000 UTC m=+765.535164148" observedRunningTime="2025-11-24 19:29:32.285759124 +0000 UTC m=+766.074711505" watchObservedRunningTime="2025-11-24 19:29:32.288169642 +0000 UTC m=+766.077122013" Nov 24 19:29:38 crc kubenswrapper[4812]: I1124 19:29:38.260281 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" event={"ID":"b313856b-afca-4673-ba78-cdcbf5d465cb","Type":"ContainerStarted","Data":"58c42db6b9668fa8b64fe9590e408fcf6d31a496a1d744b6f79c7e083ae0a354"} Nov 24 19:29:38 crc kubenswrapper[4812]: I1124 19:29:38.297787 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-dt784" podStartSLOduration=1.67118502 podStartE2EDuration="10.297762339s" 
podCreationTimestamp="2025-11-24 19:29:28 +0000 UTC" firstStartedPulling="2025-11-24 19:29:28.665507242 +0000 UTC m=+762.454459613" lastFinishedPulling="2025-11-24 19:29:37.292084551 +0000 UTC m=+771.081036932" observedRunningTime="2025-11-24 19:29:38.291935324 +0000 UTC m=+772.080887735" watchObservedRunningTime="2025-11-24 19:29:38.297762339 +0000 UTC m=+772.086714750" Nov 24 19:29:38 crc kubenswrapper[4812]: I1124 19:29:38.484269 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fp6mk" Nov 24 19:29:38 crc kubenswrapper[4812]: I1124 19:29:38.801585 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:38 crc kubenswrapper[4812]: I1124 19:29:38.801707 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:38 crc kubenswrapper[4812]: I1124 19:29:38.809382 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:39 crc kubenswrapper[4812]: I1124 19:29:39.274550 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-646969d5ff-6dtck" Nov 24 19:29:39 crc kubenswrapper[4812]: I1124 19:29:39.350879 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jb4lb"] Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.043818 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9j9t8"] Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.046536 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.074190 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9j9t8"] Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.184620 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-utilities\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.184795 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbsk\" (UniqueName: \"kubernetes.io/projected/3923efcd-d984-4e49-bc59-a3320215fb48-kube-api-access-gwbsk\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.184937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-catalog-content\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.286272 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbsk\" (UniqueName: \"kubernetes.io/projected/3923efcd-d984-4e49-bc59-a3320215fb48-kube-api-access-gwbsk\") pod \"redhat-operators-9j9t8\" (UID: 
\"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.286356 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-catalog-content\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.286416 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-utilities\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.286898 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-utilities\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.287038 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-catalog-content\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.314574 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbsk\" (UniqueName: \"kubernetes.io/projected/3923efcd-d984-4e49-bc59-a3320215fb48-kube-api-access-gwbsk\") pod \"redhat-operators-9j9t8\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") " pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.382984 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:46 crc kubenswrapper[4812]: I1124 19:29:46.792987 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9j9t8"] Nov 24 19:29:46 crc kubenswrapper[4812]: W1124 19:29:46.800212 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3923efcd_d984_4e49_bc59_a3320215fb48.slice/crio-0639aea52e80a4d91b8875daaa53bb17f5561449305e593cc27073a2bd997008 WatchSource:0}: Error finding container 0639aea52e80a4d91b8875daaa53bb17f5561449305e593cc27073a2bd997008: Status 404 returned error can't find the container with id 0639aea52e80a4d91b8875daaa53bb17f5561449305e593cc27073a2bd997008 Nov 24 19:29:47 crc kubenswrapper[4812]: I1124 19:29:47.323934 4812 generic.go:334] "Generic (PLEG): container finished" podID="3923efcd-d984-4e49-bc59-a3320215fb48" containerID="dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1" exitCode=0 Nov 24 19:29:47 crc kubenswrapper[4812]: I1124 19:29:47.323977 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j9t8" event={"ID":"3923efcd-d984-4e49-bc59-a3320215fb48","Type":"ContainerDied","Data":"dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1"} Nov 24 19:29:47 crc kubenswrapper[4812]: I1124 19:29:47.324243 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j9t8" event={"ID":"3923efcd-d984-4e49-bc59-a3320215fb48","Type":"ContainerStarted","Data":"0639aea52e80a4d91b8875daaa53bb17f5561449305e593cc27073a2bd997008"} Nov 24 19:29:48 crc kubenswrapper[4812]: I1124 19:29:48.426521 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-5ts74" Nov 24 19:29:49 crc kubenswrapper[4812]: I1124 19:29:49.341922 4812 generic.go:334] "Generic (PLEG): container finished" podID="3923efcd-d984-4e49-bc59-a3320215fb48" containerID="ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab" exitCode=0 Nov 24 19:29:49 crc kubenswrapper[4812]: I1124 19:29:49.342013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j9t8" event={"ID":"3923efcd-d984-4e49-bc59-a3320215fb48","Type":"ContainerDied","Data":"ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab"} Nov 24 19:29:50 crc kubenswrapper[4812]: I1124 19:29:50.352412 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j9t8" event={"ID":"3923efcd-d984-4e49-bc59-a3320215fb48","Type":"ContainerStarted","Data":"f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4"} Nov 24 19:29:50 crc kubenswrapper[4812]: I1124 19:29:50.377177 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9j9t8" podStartSLOduration=1.941984949 podStartE2EDuration="4.377162089s" podCreationTimestamp="2025-11-24 19:29:46 +0000 UTC" firstStartedPulling="2025-11-24 19:29:47.325457813 +0000 UTC m=+781.114410184" lastFinishedPulling="2025-11-24 19:29:49.760634913 +0000 UTC m=+783.549587324" observedRunningTime="2025-11-24 19:29:50.37651471 +0000 UTC m=+784.165467081" watchObservedRunningTime="2025-11-24 19:29:50.377162089 +0000 UTC m=+784.166114460" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.418131 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-fcsp9"] Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.421493 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.439310 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcsp9"] Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.481404 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-catalog-content\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.481671 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czl9j\" (UniqueName: \"kubernetes.io/projected/42abe1ac-df11-4743-8562-383e23dcd790-kube-api-access-czl9j\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.481888 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-utilities\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.583124 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-catalog-content\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.583279 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czl9j\" (UniqueName: \"kubernetes.io/projected/42abe1ac-df11-4743-8562-383e23dcd790-kube-api-access-czl9j\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.583383 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-utilities\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.584172 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-catalog-content\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.584286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-utilities\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") 
" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.623680 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czl9j\" (UniqueName: \"kubernetes.io/projected/42abe1ac-df11-4743-8562-383e23dcd790-kube-api-access-czl9j\") pod \"redhat-marketplace-fcsp9\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:52 crc kubenswrapper[4812]: I1124 19:29:52.747911 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:29:53 crc kubenswrapper[4812]: I1124 19:29:53.247199 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcsp9"] Nov 24 19:29:53 crc kubenswrapper[4812]: I1124 19:29:53.377158 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcsp9" event={"ID":"42abe1ac-df11-4743-8562-383e23dcd790","Type":"ContainerStarted","Data":"9c90086e6dab18d4d49a9285626783f2df189ab2b221ab84be2eaaa9d1852dc9"} Nov 24 19:29:54 crc kubenswrapper[4812]: I1124 19:29:54.385132 4812 generic.go:334] "Generic (PLEG): container finished" podID="42abe1ac-df11-4743-8562-383e23dcd790" containerID="6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9" exitCode=0 Nov 24 19:29:54 crc kubenswrapper[4812]: I1124 19:29:54.385236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcsp9" event={"ID":"42abe1ac-df11-4743-8562-383e23dcd790","Type":"ContainerDied","Data":"6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9"} Nov 24 19:29:55 crc kubenswrapper[4812]: I1124 19:29:55.392187 4812 generic.go:334] "Generic (PLEG): container finished" podID="42abe1ac-df11-4743-8562-383e23dcd790" containerID="5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325" exitCode=0 Nov 24 19:29:55 crc kubenswrapper[4812]: I1124 19:29:55.392235 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcsp9" event={"ID":"42abe1ac-df11-4743-8562-383e23dcd790","Type":"ContainerDied","Data":"5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325"} Nov 24 19:29:56 crc kubenswrapper[4812]: I1124 19:29:56.383080 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:56 crc kubenswrapper[4812]: I1124 19:29:56.383300 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:56 crc kubenswrapper[4812]: I1124 19:29:56.403182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcsp9" event={"ID":"42abe1ac-df11-4743-8562-383e23dcd790","Type":"ContainerStarted","Data":"88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7"} Nov 24 19:29:56 crc kubenswrapper[4812]: I1124 19:29:56.424045 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fcsp9" podStartSLOduration=3.018546612 podStartE2EDuration="4.42403077s" podCreationTimestamp="2025-11-24 19:29:52 +0000 UTC" firstStartedPulling="2025-11-24 19:29:54.386854277 +0000 UTC m=+788.175806668" lastFinishedPulling="2025-11-24 19:29:55.792338415 +0000 UTC m=+789.581290826" observedRunningTime="2025-11-24 19:29:56.421110178 +0000 UTC m=+790.210062549" 
Nov 24 19:29:56 crc kubenswrapper[4812]: I1124 19:29:56.429973 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9j9t8"
Nov 24 19:29:56 crc kubenswrapper[4812]: I1124 19:29:56.476694 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9j9t8"
Nov 24 19:29:58 crc kubenswrapper[4812]: I1124 19:29:58.804290 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9j9t8"]
Nov 24 19:29:58 crc kubenswrapper[4812]: I1124 19:29:58.805336 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9j9t8" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" containerName="registry-server" containerID="cri-o://f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4" gracePeriod=2
Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.258832 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9j9t8"
Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.331138 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-catalog-content\") pod \"3923efcd-d984-4e49-bc59-a3320215fb48\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") "
Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.332969 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwbsk\" (UniqueName: \"kubernetes.io/projected/3923efcd-d984-4e49-bc59-a3320215fb48-kube-api-access-gwbsk\") pod \"3923efcd-d984-4e49-bc59-a3320215fb48\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") "
Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.333065 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-utilities\") pod \"3923efcd-d984-4e49-bc59-a3320215fb48\" (UID: \"3923efcd-d984-4e49-bc59-a3320215fb48\") "
Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.334400 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-utilities" (OuterVolumeSpecName: "utilities") pod "3923efcd-d984-4e49-bc59-a3320215fb48" (UID: "3923efcd-d984-4e49-bc59-a3320215fb48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.340246 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3923efcd-d984-4e49-bc59-a3320215fb48-kube-api-access-gwbsk" (OuterVolumeSpecName: "kube-api-access-gwbsk") pod "3923efcd-d984-4e49-bc59-a3320215fb48" (UID: "3923efcd-d984-4e49-bc59-a3320215fb48"). InnerVolumeSpecName "kube-api-access-gwbsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.425791 4812 generic.go:334] "Generic (PLEG): container finished" podID="3923efcd-d984-4e49-bc59-a3320215fb48" containerID="f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4" exitCode=0 Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.425834 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j9t8" event={"ID":"3923efcd-d984-4e49-bc59-a3320215fb48","Type":"ContainerDied","Data":"f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4"} Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.425893 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9j9t8" event={"ID":"3923efcd-d984-4e49-bc59-a3320215fb48","Type":"ContainerDied","Data":"0639aea52e80a4d91b8875daaa53bb17f5561449305e593cc27073a2bd997008"} Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.425892 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9j9t8" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.425918 4812 scope.go:117] "RemoveContainer" containerID="f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.435731 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwbsk\" (UniqueName: \"kubernetes.io/projected/3923efcd-d984-4e49-bc59-a3320215fb48-kube-api-access-gwbsk\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.435793 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.447209 4812 scope.go:117] "RemoveContainer" containerID="ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.464477 4812 scope.go:117] "RemoveContainer" containerID="dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.487672 4812 scope.go:117] "RemoveContainer" containerID="f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4" Nov 24 19:29:59 crc kubenswrapper[4812]: E1124 19:29:59.488201 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4\": container with ID starting with f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4 not found: ID does not exist" containerID="f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.488247 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4"} err="failed to get container status \"f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4\": rpc error: code = NotFound desc = could not find container \"f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4\": container with ID starting with f41c255899ff8bc6a0621689204360d8f239ab5823d43167cc346e7c04b648d4 not found: ID does not exist" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.488279 4812 scope.go:117] 
"RemoveContainer" containerID="ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab" Nov 24 19:29:59 crc kubenswrapper[4812]: E1124 19:29:59.488807 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab\": container with ID starting with ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab not found: ID does not exist" containerID="ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.488862 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab"} err="failed to get container status \"ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab\": rpc error: code = NotFound desc = could not find container \"ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab\": container with ID starting with ef377e3c4f7b545f8907a0df81fc59389f9c0c66a4a72521356e06f4e11719ab not found: ID does not exist" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.488899 4812 scope.go:117] "RemoveContainer" containerID="dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1" Nov 24 19:29:59 crc kubenswrapper[4812]: E1124 19:29:59.489321 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1\": container with ID starting with dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1 not found: ID does not exist" containerID="dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1" Nov 24 19:29:59 crc kubenswrapper[4812]: I1124 19:29:59.489375 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1"} err="failed to get container status \"dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1\": rpc error: code = NotFound desc = could not find container \"dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1\": container with ID starting with dd5f6e921de7383983e15124bdf81b87e9938692489663ee4a23b6f1c4d549a1 not found: ID does not exist" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.149774 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8"] Nov 24 19:30:00 crc kubenswrapper[4812]: E1124 19:30:00.150584 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" containerName="extract-utilities" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.150606 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" containerName="extract-utilities" Nov 24 19:30:00 crc kubenswrapper[4812]: E1124 19:30:00.150624 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" containerName="registry-server" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.150635 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" containerName="registry-server" Nov 24 19:30:00 crc kubenswrapper[4812]: E1124 19:30:00.150659 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" 
containerName="extract-content" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.150667 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" containerName="extract-content" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.150805 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" containerName="registry-server" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.165285 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8"] Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.165399 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.176928 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.178311 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.247637 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79j9\" (UniqueName: \"kubernetes.io/projected/cd51106d-321e-45b6-8d19-1925dcfccb82-kube-api-access-v79j9\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.247704 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd51106d-321e-45b6-8d19-1925dcfccb82-config-volume\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.247741 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd51106d-321e-45b6-8d19-1925dcfccb82-secret-volume\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.389635 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v79j9\" (UniqueName: \"kubernetes.io/projected/cd51106d-321e-45b6-8d19-1925dcfccb82-kube-api-access-v79j9\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.389701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd51106d-321e-45b6-8d19-1925dcfccb82-config-volume\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.389740 4812 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd51106d-321e-45b6-8d19-1925dcfccb82-secret-volume\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.392296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd51106d-321e-45b6-8d19-1925dcfccb82-config-volume\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.410251 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd51106d-321e-45b6-8d19-1925dcfccb82-secret-volume\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.414166 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v79j9\" (UniqueName: \"kubernetes.io/projected/cd51106d-321e-45b6-8d19-1925dcfccb82-kube-api-access-v79j9\") pod \"collect-profiles-29400210-7l2c8\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.483749 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.493487 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3923efcd-d984-4e49-bc59-a3320215fb48" (UID: "3923efcd-d984-4e49-bc59-a3320215fb48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.592769 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3923efcd-d984-4e49-bc59-a3320215fb48-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.658859 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9j9t8"] Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.661894 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9j9t8"] Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.934825 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8"] Nov 24 19:30:00 crc kubenswrapper[4812]: W1124 19:30:00.940106 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd51106d_321e_45b6_8d19_1925dcfccb82.slice/crio-0f63ba124f89deccd92e406e691454762130d589f73f1a8b44d647590c655223 WatchSource:0}: Error finding container 0f63ba124f89deccd92e406e691454762130d589f73f1a8b44d647590c655223: Status 404 returned error can't find the container with id 0f63ba124f89deccd92e406e691454762130d589f73f1a8b44d647590c655223 Nov 24 19:30:00 crc kubenswrapper[4812]: I1124 19:30:00.973671 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3923efcd-d984-4e49-bc59-a3320215fb48" path="/var/lib/kubelet/pods/3923efcd-d984-4e49-bc59-a3320215fb48/volumes" Nov 24 19:30:01 crc kubenswrapper[4812]: I1124 19:30:01.442615 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" event={"ID":"cd51106d-321e-45b6-8d19-1925dcfccb82","Type":"ContainerStarted","Data":"0f63ba124f89deccd92e406e691454762130d589f73f1a8b44d647590c655223"} Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.449470 4812 generic.go:334] "Generic (PLEG): container finished" podID="cd51106d-321e-45b6-8d19-1925dcfccb82" containerID="2ab2843cd8de441af746041831a0b7425ae5087f4c6a77bd10673772c5ad544b" exitCode=0 Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.449524 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" event={"ID":"cd51106d-321e-45b6-8d19-1925dcfccb82","Type":"ContainerDied","Data":"2ab2843cd8de441af746041831a0b7425ae5087f4c6a77bd10673772c5ad544b"} Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.749191 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.749275 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.811017 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zmdfd"] Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.812198 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.828480 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpskn\" (UniqueName: \"kubernetes.io/projected/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-kube-api-access-jpskn\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.828578 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-catalog-content\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.828632 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-utilities\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.830031 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmdfd"] Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.848616 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.929775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpskn\" (UniqueName: \"kubernetes.io/projected/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-kube-api-access-jpskn\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.929870 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-catalog-content\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.929917 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-utilities\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.930410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-catalog-content\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.930445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-utilities\") pod \"certified-operators-zmdfd\" (UID: 
\"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.953387 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpskn\" (UniqueName: \"kubernetes.io/projected/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-kube-api-access-jpskn\") pod \"certified-operators-zmdfd\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.998934 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:30:02 crc kubenswrapper[4812]: I1124 19:30:02.999004 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.050383 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"] Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.051746 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.053874 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.062465 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"] Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.132812 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.132974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.133005 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-498tr\" (UniqueName: \"kubernetes.io/projected/0e652404-fed6-42f2-aae4-02b572479de1-kube-api-access-498tr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" Nov 24 19:30:03 crc 
Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.235893 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"
Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.236401 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-498tr\" (UniqueName: \"kubernetes.io/projected/0e652404-fed6-42f2-aae4-02b572479de1-kube-api-access-498tr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"
Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.236508 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"
Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.237974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"
Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.238000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"
Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.267174 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-498tr\" (UniqueName: \"kubernetes.io/projected/0e652404-fed6-42f2-aae4-02b572479de1-kube-api-access-498tr\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"
Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.370457 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.500062 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.557874 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zmdfd"] Nov 24 19:30:03 crc kubenswrapper[4812]: W1124 19:30:03.573424 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3157a3ce_320d_4d67_8a6c_8bf7e30f7200.slice/crio-c3135b1c9950041cc5b355eddead78360480b0bf19e95041184baf2c5c9e930f WatchSource:0}: Error finding container c3135b1c9950041cc5b355eddead78360480b0bf19e95041184baf2c5c9e930f: Status 404 returned error can't find the container with id c3135b1c9950041cc5b355eddead78360480b0bf19e95041184baf2c5c9e930f Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.596872 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd"] Nov 24 19:30:03 crc kubenswrapper[4812]: W1124 19:30:03.604910 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e652404_fed6_42f2_aae4_02b572479de1.slice/crio-7e8da0831cb276c06989fef694ee1611849184a6e6b6be7eef7dea7fc3510495 WatchSource:0}: Error finding container 7e8da0831cb276c06989fef694ee1611849184a6e6b6be7eef7dea7fc3510495: Status 404 returned error can't find the container with id 7e8da0831cb276c06989fef694ee1611849184a6e6b6be7eef7dea7fc3510495 Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.867460 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.945458 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd51106d-321e-45b6-8d19-1925dcfccb82-config-volume\") pod \"cd51106d-321e-45b6-8d19-1925dcfccb82\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.945512 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd51106d-321e-45b6-8d19-1925dcfccb82-secret-volume\") pod \"cd51106d-321e-45b6-8d19-1925dcfccb82\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.945552 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v79j9\" (UniqueName: \"kubernetes.io/projected/cd51106d-321e-45b6-8d19-1925dcfccb82-kube-api-access-v79j9\") pod \"cd51106d-321e-45b6-8d19-1925dcfccb82\" (UID: \"cd51106d-321e-45b6-8d19-1925dcfccb82\") " Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.946562 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd51106d-321e-45b6-8d19-1925dcfccb82-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd51106d-321e-45b6-8d19-1925dcfccb82" (UID: "cd51106d-321e-45b6-8d19-1925dcfccb82"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.951829 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd51106d-321e-45b6-8d19-1925dcfccb82-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd51106d-321e-45b6-8d19-1925dcfccb82" (UID: "cd51106d-321e-45b6-8d19-1925dcfccb82"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:30:03 crc kubenswrapper[4812]: I1124 19:30:03.952289 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd51106d-321e-45b6-8d19-1925dcfccb82-kube-api-access-v79j9" (OuterVolumeSpecName: "kube-api-access-v79j9") pod "cd51106d-321e-45b6-8d19-1925dcfccb82" (UID: "cd51106d-321e-45b6-8d19-1925dcfccb82"). InnerVolumeSpecName "kube-api-access-v79j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.049498 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v79j9\" (UniqueName: \"kubernetes.io/projected/cd51106d-321e-45b6-8d19-1925dcfccb82-kube-api-access-v79j9\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.049571 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd51106d-321e-45b6-8d19-1925dcfccb82-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.049595 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd51106d-321e-45b6-8d19-1925dcfccb82-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.411051 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jb4lb" podUID="68125ac7-cf1b-4461-820a-b7318076e62d" containerName="console" containerID="cri-o://d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217" gracePeriod=15 Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.466393 4812 generic.go:334] "Generic (PLEG): container finished" podID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerID="ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51" exitCode=0 Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.466489 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmdfd" event={"ID":"3157a3ce-320d-4d67-8a6c-8bf7e30f7200","Type":"ContainerDied","Data":"ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51"} Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.466563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmdfd" event={"ID":"3157a3ce-320d-4d67-8a6c-8bf7e30f7200","Type":"ContainerStarted","Data":"c3135b1c9950041cc5b355eddead78360480b0bf19e95041184baf2c5c9e930f"} Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.476938 4812 generic.go:334] "Generic (PLEG): container finished" podID="0e652404-fed6-42f2-aae4-02b572479de1" containerID="19a3384377f2b09d83b4c362d4f12d7a8a6bfc8d9c3141effb3c2b65267cb7d9" exitCode=0 Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.477098 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" 
event={"ID":"0e652404-fed6-42f2-aae4-02b572479de1","Type":"ContainerDied","Data":"19a3384377f2b09d83b4c362d4f12d7a8a6bfc8d9c3141effb3c2b65267cb7d9"} Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.477176 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" event={"ID":"0e652404-fed6-42f2-aae4-02b572479de1","Type":"ContainerStarted","Data":"7e8da0831cb276c06989fef694ee1611849184a6e6b6be7eef7dea7fc3510495"} Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.483821 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.483980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8" event={"ID":"cd51106d-321e-45b6-8d19-1925dcfccb82","Type":"ContainerDied","Data":"0f63ba124f89deccd92e406e691454762130d589f73f1a8b44d647590c655223"} Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.484025 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f63ba124f89deccd92e406e691454762130d589f73f1a8b44d647590c655223" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.759941 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jb4lb_68125ac7-cf1b-4461-820a-b7318076e62d/console/0.log" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.760246 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jb4lb" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.859928 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-oauth-config\") pod \"68125ac7-cf1b-4461-820a-b7318076e62d\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.860040 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-service-ca\") pod \"68125ac7-cf1b-4461-820a-b7318076e62d\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.860118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-serving-cert\") pod \"68125ac7-cf1b-4461-820a-b7318076e62d\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.860183 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-console-config\") pod \"68125ac7-cf1b-4461-820a-b7318076e62d\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.860250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-trusted-ca-bundle\") pod \"68125ac7-cf1b-4461-820a-b7318076e62d\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") " Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.860292 
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.860292 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-oauth-serving-cert\") pod \"68125ac7-cf1b-4461-820a-b7318076e62d\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") "
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.860439 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtswf\" (UniqueName: \"kubernetes.io/projected/68125ac7-cf1b-4461-820a-b7318076e62d-kube-api-access-dtswf\") pod \"68125ac7-cf1b-4461-820a-b7318076e62d\" (UID: \"68125ac7-cf1b-4461-820a-b7318076e62d\") "
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.861102 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-service-ca" (OuterVolumeSpecName: "service-ca") pod "68125ac7-cf1b-4461-820a-b7318076e62d" (UID: "68125ac7-cf1b-4461-820a-b7318076e62d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.861118 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "68125ac7-cf1b-4461-820a-b7318076e62d" (UID: "68125ac7-cf1b-4461-820a-b7318076e62d"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.861149 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-console-config" (OuterVolumeSpecName: "console-config") pod "68125ac7-cf1b-4461-820a-b7318076e62d" (UID: "68125ac7-cf1b-4461-820a-b7318076e62d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.861189 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "68125ac7-cf1b-4461-820a-b7318076e62d" (UID: "68125ac7-cf1b-4461-820a-b7318076e62d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.865511 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "68125ac7-cf1b-4461-820a-b7318076e62d" (UID: "68125ac7-cf1b-4461-820a-b7318076e62d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.865719 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "68125ac7-cf1b-4461-820a-b7318076e62d" (UID: "68125ac7-cf1b-4461-820a-b7318076e62d"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.865756 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68125ac7-cf1b-4461-820a-b7318076e62d-kube-api-access-dtswf" (OuterVolumeSpecName: "kube-api-access-dtswf") pod "68125ac7-cf1b-4461-820a-b7318076e62d" (UID: "68125ac7-cf1b-4461-820a-b7318076e62d"). InnerVolumeSpecName "kube-api-access-dtswf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.962615 4812 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.962673 4812 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.962692 4812 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.962712 4812 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.962729 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtswf\" (UniqueName: \"kubernetes.io/projected/68125ac7-cf1b-4461-820a-b7318076e62d-kube-api-access-dtswf\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.962748 4812 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/68125ac7-cf1b-4461-820a-b7318076e62d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:04 crc kubenswrapper[4812]: I1124 19:30:04.962765 4812 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/68125ac7-cf1b-4461-820a-b7318076e62d-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.490298 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jb4lb_68125ac7-cf1b-4461-820a-b7318076e62d/console/0.log" Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.490503 4812 generic.go:334] "Generic (PLEG): container finished" podID="68125ac7-cf1b-4461-820a-b7318076e62d" containerID="d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217" exitCode=2 Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.490550 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jb4lb" event={"ID":"68125ac7-cf1b-4461-820a-b7318076e62d","Type":"ContainerDied","Data":"d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217"} Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.490601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jb4lb" event={"ID":"68125ac7-cf1b-4461-820a-b7318076e62d","Type":"ContainerDied","Data":"fa4b917ebd1be0bdd22b421c7647ab27c65c98326466c63f19eeb1c235eb413f"} Nov 24 19:30:05 
Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.490630 4812 scope.go:117] "RemoveContainer" containerID="d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217"
Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.490658 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jb4lb"
Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.512066 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jb4lb"]
Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.517596 4812 scope.go:117] "RemoveContainer" containerID="d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217"
Nov 24 19:30:05 crc kubenswrapper[4812]: E1124 19:30:05.518165 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217\": container with ID starting with d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217 not found: ID does not exist" containerID="d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217"
Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.518212 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217"} err="failed to get container status \"d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217\": rpc error: code = NotFound desc = could not find container \"d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217\": container with ID starting with d1cc3c69ff571dfdb28ef512938a1977dab8160b24b9202f9968218a18535217 not found: ID does not exist"
Nov 24 19:30:05 crc kubenswrapper[4812]: I1124 19:30:05.519364 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jb4lb"]
Nov 24 19:30:06 crc kubenswrapper[4812]: I1124 19:30:06.505539 4812 generic.go:334] "Generic (PLEG): container finished" podID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerID="86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02" exitCode=0
Nov 24 19:30:06 crc kubenswrapper[4812]: I1124 19:30:06.505608 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmdfd" event={"ID":"3157a3ce-320d-4d67-8a6c-8bf7e30f7200","Type":"ContainerDied","Data":"86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02"}
Nov 24 19:30:06 crc kubenswrapper[4812]: I1124 19:30:06.806407 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcsp9"]
Nov 24 19:30:06 crc kubenswrapper[4812]: I1124 19:30:06.806908 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fcsp9" podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="registry-server" containerID="cri-o://88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7" gracePeriod=2
Nov 24 19:30:06 crc kubenswrapper[4812]: I1124 19:30:06.980151 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68125ac7-cf1b-4461-820a-b7318076e62d" path="/var/lib/kubelet/pods/68125ac7-cf1b-4461-820a-b7318076e62d/volumes"
Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.321481 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcsp9"
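
Both "Killing container with a grace period" entries in this stretch (gracePeriod=15 for the console pod, gracePeriod=2 for the catalog pods) follow the standard termination sequence: TERM first, KILL once the grace period runs out, with the observed exit code surfacing afterwards in a PLEG ContainerDied event. A rough stand-alone sketch of that sequence, using a local process in place of a container:

    import signal
    import subprocess
    import time

    def kill_with_grace(proc: subprocess.Popen, grace_seconds: float) -> None:
        proc.send_signal(signal.SIGTERM)        # polite request to the process
        deadline = time.monotonic() + grace_seconds
        while time.monotonic() < deadline:
            if proc.poll() is not None:         # exited on its own within the grace period
                return
            time.sleep(0.1)
        proc.kill()                             # grace period exhausted: SIGKILL
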
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.399537 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-utilities\") pod \"42abe1ac-df11-4743-8562-383e23dcd790\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.399617 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czl9j\" (UniqueName: \"kubernetes.io/projected/42abe1ac-df11-4743-8562-383e23dcd790-kube-api-access-czl9j\") pod \"42abe1ac-df11-4743-8562-383e23dcd790\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.399650 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-catalog-content\") pod \"42abe1ac-df11-4743-8562-383e23dcd790\" (UID: \"42abe1ac-df11-4743-8562-383e23dcd790\") " Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.400396 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-utilities" (OuterVolumeSpecName: "utilities") pod "42abe1ac-df11-4743-8562-383e23dcd790" (UID: "42abe1ac-df11-4743-8562-383e23dcd790"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.410124 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42abe1ac-df11-4743-8562-383e23dcd790-kube-api-access-czl9j" (OuterVolumeSpecName: "kube-api-access-czl9j") pod "42abe1ac-df11-4743-8562-383e23dcd790" (UID: "42abe1ac-df11-4743-8562-383e23dcd790"). InnerVolumeSpecName "kube-api-access-czl9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.424089 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42abe1ac-df11-4743-8562-383e23dcd790" (UID: "42abe1ac-df11-4743-8562-383e23dcd790"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.501456 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czl9j\" (UniqueName: \"kubernetes.io/projected/42abe1ac-df11-4743-8562-383e23dcd790-kube-api-access-czl9j\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.501786 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.501858 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42abe1ac-df11-4743-8562-383e23dcd790-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.517850 4812 generic.go:334] "Generic (PLEG): container finished" podID="42abe1ac-df11-4743-8562-383e23dcd790" containerID="88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7" exitCode=0 Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.517935 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcsp9" event={"ID":"42abe1ac-df11-4743-8562-383e23dcd790","Type":"ContainerDied","Data":"88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7"} Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.517976 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcsp9" event={"ID":"42abe1ac-df11-4743-8562-383e23dcd790","Type":"ContainerDied","Data":"9c90086e6dab18d4d49a9285626783f2df189ab2b221ab84be2eaaa9d1852dc9"} Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.518008 4812 scope.go:117] "RemoveContainer" containerID="88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.518556 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcsp9" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.522693 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmdfd" event={"ID":"3157a3ce-320d-4d67-8a6c-8bf7e30f7200","Type":"ContainerStarted","Data":"490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577"} Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.525164 4812 generic.go:334] "Generic (PLEG): container finished" podID="0e652404-fed6-42f2-aae4-02b572479de1" containerID="8de499142fdea14722c384fdd098543399ebd1249795c86083004667a01656e9" exitCode=0 Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.525232 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" event={"ID":"0e652404-fed6-42f2-aae4-02b572479de1","Type":"ContainerDied","Data":"8de499142fdea14722c384fdd098543399ebd1249795c86083004667a01656e9"} Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.543828 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zmdfd" podStartSLOduration=3.12593386 podStartE2EDuration="5.54380459s" podCreationTimestamp="2025-11-24 19:30:02 +0000 UTC" firstStartedPulling="2025-11-24 19:30:04.470105102 +0000 UTC m=+798.259057513" lastFinishedPulling="2025-11-24 19:30:06.887975832 +0000 UTC m=+800.676928243" observedRunningTime="2025-11-24 19:30:07.54275305 +0000 UTC m=+801.331705441" watchObservedRunningTime="2025-11-24 19:30:07.54380459 +0000 UTC m=+801.332756981" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.554308 4812 scope.go:117] "RemoveContainer" containerID="5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.586813 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcsp9"] Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.590584 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcsp9"] Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.610858 4812 scope.go:117] "RemoveContainer" containerID="6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.644888 4812 scope.go:117] "RemoveContainer" containerID="88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7" Nov 24 19:30:07 crc kubenswrapper[4812]: E1124 19:30:07.645468 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7\": container with ID starting with 88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7 not found: ID does not exist" containerID="88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.645508 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7"} err="failed to get container status \"88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7\": rpc error: code = NotFound desc = could not find container \"88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7\": container with ID starting with 88521d00bc6219d88bff79750034fb4872cf1769ccb2c655b407c421de8c84a7 
not found: ID does not exist" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.645587 4812 scope.go:117] "RemoveContainer" containerID="5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325" Nov 24 19:30:07 crc kubenswrapper[4812]: E1124 19:30:07.645954 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325\": container with ID starting with 5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325 not found: ID does not exist" containerID="5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.645981 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325"} err="failed to get container status \"5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325\": rpc error: code = NotFound desc = could not find container \"5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325\": container with ID starting with 5e7ba74c2c56f8e611edb536a7cd223ec2600a20c978834d9620fbfbb8094325 not found: ID does not exist" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.646001 4812 scope.go:117] "RemoveContainer" containerID="6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9" Nov 24 19:30:07 crc kubenswrapper[4812]: E1124 19:30:07.646211 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9\": container with ID starting with 6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9 not found: ID does not exist" containerID="6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9" Nov 24 19:30:07 crc kubenswrapper[4812]: I1124 19:30:07.646236 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9"} err="failed to get container status \"6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9\": rpc error: code = NotFound desc = could not find container \"6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9\": container with ID starting with 6a6f35224755c1ba1ae0940b4d7f954356046725ec03b81668069c15500b63f9 not found: ID does not exist" Nov 24 19:30:08 crc kubenswrapper[4812]: I1124 19:30:08.539865 4812 generic.go:334] "Generic (PLEG): container finished" podID="0e652404-fed6-42f2-aae4-02b572479de1" containerID="cdd47f458a53daa4c1e0c84312703a996ad73e35ce56411e99d366e827638a54" exitCode=0 Nov 24 19:30:08 crc kubenswrapper[4812]: I1124 19:30:08.539938 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" event={"ID":"0e652404-fed6-42f2-aae4-02b572479de1","Type":"ContainerDied","Data":"cdd47f458a53daa4c1e0c84312703a996ad73e35ce56411e99d366e827638a54"} Nov 24 19:30:08 crc kubenswrapper[4812]: I1124 19:30:08.975379 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42abe1ac-df11-4743-8562-383e23dcd790" path="/var/lib/kubelet/pods/42abe1ac-df11-4743-8562-383e23dcd790/volumes" Nov 24 19:30:09 crc kubenswrapper[4812]: I1124 19:30:09.989877 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.038971 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-bundle\") pod \"0e652404-fed6-42f2-aae4-02b572479de1\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.039083 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-util\") pod \"0e652404-fed6-42f2-aae4-02b572479de1\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.039196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-498tr\" (UniqueName: \"kubernetes.io/projected/0e652404-fed6-42f2-aae4-02b572479de1-kube-api-access-498tr\") pod \"0e652404-fed6-42f2-aae4-02b572479de1\" (UID: \"0e652404-fed6-42f2-aae4-02b572479de1\") " Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.041503 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-bundle" (OuterVolumeSpecName: "bundle") pod "0e652404-fed6-42f2-aae4-02b572479de1" (UID: "0e652404-fed6-42f2-aae4-02b572479de1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.047589 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e652404-fed6-42f2-aae4-02b572479de1-kube-api-access-498tr" (OuterVolumeSpecName: "kube-api-access-498tr") pod "0e652404-fed6-42f2-aae4-02b572479de1" (UID: "0e652404-fed6-42f2-aae4-02b572479de1"). InnerVolumeSpecName "kube-api-access-498tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.066000 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-util" (OuterVolumeSpecName: "util") pod "0e652404-fed6-42f2-aae4-02b572479de1" (UID: "0e652404-fed6-42f2-aae4-02b572479de1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.140764 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-498tr\" (UniqueName: \"kubernetes.io/projected/0e652404-fed6-42f2-aae4-02b572479de1-kube-api-access-498tr\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.140818 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.140837 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0e652404-fed6-42f2-aae4-02b572479de1-util\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.561585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" event={"ID":"0e652404-fed6-42f2-aae4-02b572479de1","Type":"ContainerDied","Data":"7e8da0831cb276c06989fef694ee1611849184a6e6b6be7eef7dea7fc3510495"} Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.561649 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8da0831cb276c06989fef694ee1611849184a6e6b6be7eef7dea7fc3510495" Nov 24 19:30:10 crc kubenswrapper[4812]: I1124 19:30:10.561780 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd" Nov 24 19:30:13 crc kubenswrapper[4812]: I1124 19:30:13.139240 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:13 crc kubenswrapper[4812]: I1124 19:30:13.139539 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:13 crc kubenswrapper[4812]: I1124 19:30:13.176931 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:13 crc kubenswrapper[4812]: I1124 19:30:13.641020 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:14 crc kubenswrapper[4812]: I1124 19:30:14.803657 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmdfd"] Nov 24 19:30:15 crc kubenswrapper[4812]: I1124 19:30:15.594271 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zmdfd" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="registry-server" containerID="cri-o://490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577" gracePeriod=2 Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.037106 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.226363 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpskn\" (UniqueName: \"kubernetes.io/projected/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-kube-api-access-jpskn\") pod \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.226446 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-catalog-content\") pod \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.226563 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-utilities\") pod \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\" (UID: \"3157a3ce-320d-4d67-8a6c-8bf7e30f7200\") " Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.228165 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-utilities" (OuterVolumeSpecName: "utilities") pod "3157a3ce-320d-4d67-8a6c-8bf7e30f7200" (UID: "3157a3ce-320d-4d67-8a6c-8bf7e30f7200"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.232052 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-kube-api-access-jpskn" (OuterVolumeSpecName: "kube-api-access-jpskn") pod "3157a3ce-320d-4d67-8a6c-8bf7e30f7200" (UID: "3157a3ce-320d-4d67-8a6c-8bf7e30f7200"). InnerVolumeSpecName "kube-api-access-jpskn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.299333 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3157a3ce-320d-4d67-8a6c-8bf7e30f7200" (UID: "3157a3ce-320d-4d67-8a6c-8bf7e30f7200"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.327701 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpskn\" (UniqueName: \"kubernetes.io/projected/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-kube-api-access-jpskn\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.327723 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.327732 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3157a3ce-320d-4d67-8a6c-8bf7e30f7200-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.601794 4812 generic.go:334] "Generic (PLEG): container finished" podID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerID="490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577" exitCode=0 Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.601839 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmdfd" event={"ID":"3157a3ce-320d-4d67-8a6c-8bf7e30f7200","Type":"ContainerDied","Data":"490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577"} Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.601879 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zmdfd" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.601906 4812 scope.go:117] "RemoveContainer" containerID="490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.601894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zmdfd" event={"ID":"3157a3ce-320d-4d67-8a6c-8bf7e30f7200","Type":"ContainerDied","Data":"c3135b1c9950041cc5b355eddead78360480b0bf19e95041184baf2c5c9e930f"} Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.619015 4812 scope.go:117] "RemoveContainer" containerID="86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.630905 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zmdfd"] Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.634659 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zmdfd"] Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.637727 4812 scope.go:117] "RemoveContainer" containerID="ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.667295 4812 scope.go:117] "RemoveContainer" containerID="490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577" Nov 24 19:30:16 crc kubenswrapper[4812]: E1124 19:30:16.667704 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577\": container with ID starting with 490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577 not found: ID does not exist" containerID="490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.667748 
4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577"} err="failed to get container status \"490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577\": rpc error: code = NotFound desc = could not find container \"490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577\": container with ID starting with 490ae8a4a9b342cb386c681e321e01b4462fa5321bee68131be094233c88f577 not found: ID does not exist" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.667777 4812 scope.go:117] "RemoveContainer" containerID="86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02" Nov 24 19:30:16 crc kubenswrapper[4812]: E1124 19:30:16.668083 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02\": container with ID starting with 86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02 not found: ID does not exist" containerID="86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.668114 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02"} err="failed to get container status \"86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02\": rpc error: code = NotFound desc = could not find container \"86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02\": container with ID starting with 86d4b7f42b2dffdcd6591c73f2a98e01b8f3b9e1019e3ffc1903d9541f57aa02 not found: ID does not exist" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.668135 4812 scope.go:117] "RemoveContainer" containerID="ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51" Nov 24 19:30:16 crc kubenswrapper[4812]: E1124 19:30:16.668349 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51\": container with ID starting with ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51 not found: ID does not exist" containerID="ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.668369 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51"} err="failed to get container status \"ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51\": rpc error: code = NotFound desc = could not find container \"ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51\": container with ID starting with ca1185d2b5211a61e1c2cc864484d3e6ccb08064c9592dce0c0ec50893780f51 not found: ID does not exist" Nov 24 19:30:16 crc kubenswrapper[4812]: I1124 19:30:16.973033 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" path="/var/lib/kubelet/pods/3157a3ce-320d-4d67-8a6c-8bf7e30f7200/volumes" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024350 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-brhd4"] Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024836 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="extract-content" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024849 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="extract-content" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024866 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="extract-utilities" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024874 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="extract-utilities" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024887 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="extract-content" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024893 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="extract-content" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024904 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="extract-utilities" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024909 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="extract-utilities" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024915 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e652404-fed6-42f2-aae4-02b572479de1" containerName="pull" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024921 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e652404-fed6-42f2-aae4-02b572479de1" containerName="pull" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024933 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="registry-server" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024939 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="registry-server" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024949 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68125ac7-cf1b-4461-820a-b7318076e62d" containerName="console" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024955 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="68125ac7-cf1b-4461-820a-b7318076e62d" containerName="console" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024963 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd51106d-321e-45b6-8d19-1925dcfccb82" containerName="collect-profiles" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024969 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd51106d-321e-45b6-8d19-1925dcfccb82" containerName="collect-profiles" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024976 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e652404-fed6-42f2-aae4-02b572479de1" containerName="util" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024982 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e652404-fed6-42f2-aae4-02b572479de1" containerName="util" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.024991 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0e652404-fed6-42f2-aae4-02b572479de1" containerName="extract" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.024997 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e652404-fed6-42f2-aae4-02b572479de1" containerName="extract" Nov 24 19:30:18 crc kubenswrapper[4812]: E1124 19:30:18.025006 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="registry-server" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.025011 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="registry-server" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.025101 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3157a3ce-320d-4d67-8a6c-8bf7e30f7200" containerName="registry-server" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.025111 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="68125ac7-cf1b-4461-820a-b7318076e62d" containerName="console" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.025119 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="42abe1ac-df11-4743-8562-383e23dcd790" containerName="registry-server" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.025128 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e652404-fed6-42f2-aae4-02b572479de1" containerName="extract" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.025139 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd51106d-321e-45b6-8d19-1925dcfccb82" containerName="collect-profiles" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.025922 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.054465 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-brhd4"] Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.154196 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bmq\" (UniqueName: \"kubernetes.io/projected/00663509-e1ad-4c0a-98f8-1593f49f3af4-kube-api-access-s4bmq\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.154237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-catalog-content\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.154258 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-utilities\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.236175 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4"] Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.236845 4812 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.238316 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.238564 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.239035 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.239410 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-p7t4v" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.239692 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.254803 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bmq\" (UniqueName: \"kubernetes.io/projected/00663509-e1ad-4c0a-98f8-1593f49f3af4-kube-api-access-s4bmq\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.254847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-catalog-content\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.254870 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-utilities\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.255222 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-utilities\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.255709 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-catalog-content\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.259483 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4"] Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.290090 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bmq\" (UniqueName: \"kubernetes.io/projected/00663509-e1ad-4c0a-98f8-1593f49f3af4-kube-api-access-s4bmq\") pod \"community-operators-brhd4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " 
pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.342117 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.358154 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d0c4f09-ce13-4072-8746-9b086bca839f-webhook-cert\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.358197 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9rrq\" (UniqueName: \"kubernetes.io/projected/3d0c4f09-ce13-4072-8746-9b086bca839f-kube-api-access-v9rrq\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.358230 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d0c4f09-ce13-4072-8746-9b086bca839f-apiservice-cert\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.459980 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d0c4f09-ce13-4072-8746-9b086bca839f-webhook-cert\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.460317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9rrq\" (UniqueName: \"kubernetes.io/projected/3d0c4f09-ce13-4072-8746-9b086bca839f-kube-api-access-v9rrq\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.460372 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d0c4f09-ce13-4072-8746-9b086bca839f-apiservice-cert\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.465006 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d0c4f09-ce13-4072-8746-9b086bca839f-webhook-cert\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.465014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/3d0c4f09-ce13-4072-8746-9b086bca839f-apiservice-cert\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.491779 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9rrq\" (UniqueName: \"kubernetes.io/projected/3d0c4f09-ce13-4072-8746-9b086bca839f-kube-api-access-v9rrq\") pod \"metallb-operator-controller-manager-78f47c68d-wvmj4\" (UID: \"3d0c4f09-ce13-4072-8746-9b086bca839f\") " pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.554781 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.568411 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6"] Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.569049 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.577086 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.577204 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.577308 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-nz4lf" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.583780 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6"] Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.763143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66596d46-48a0-4690-935e-e0144ee6923c-apiservice-cert\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.763207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfbwk\" (UniqueName: \"kubernetes.io/projected/66596d46-48a0-4690-935e-e0144ee6923c-kube-api-access-vfbwk\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.763251 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66596d46-48a0-4690-935e-e0144ee6923c-webhook-cert\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.830460 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-brhd4"] Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.859592 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4"] Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.864818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66596d46-48a0-4690-935e-e0144ee6923c-apiservice-cert\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.864881 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfbwk\" (UniqueName: \"kubernetes.io/projected/66596d46-48a0-4690-935e-e0144ee6923c-kube-api-access-vfbwk\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.864927 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66596d46-48a0-4690-935e-e0144ee6923c-webhook-cert\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.868941 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/66596d46-48a0-4690-935e-e0144ee6923c-apiservice-cert\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.869035 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/66596d46-48a0-4690-935e-e0144ee6923c-webhook-cert\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.883062 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfbwk\" (UniqueName: \"kubernetes.io/projected/66596d46-48a0-4690-935e-e0144ee6923c-kube-api-access-vfbwk\") pod \"metallb-operator-webhook-server-84b697fbfb-749d6\" (UID: \"66596d46-48a0-4690-935e-e0144ee6923c\") " pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:18 crc kubenswrapper[4812]: I1124 19:30:18.903888 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:19 crc kubenswrapper[4812]: I1124 19:30:19.205040 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6"] Nov 24 19:30:19 crc kubenswrapper[4812]: W1124 19:30:19.220477 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66596d46_48a0_4690_935e_e0144ee6923c.slice/crio-07b948a69d0c14e3d143c4fef06ed26e30a151a1aec2d66ef95ed39e15c8b26a WatchSource:0}: Error finding container 07b948a69d0c14e3d143c4fef06ed26e30a151a1aec2d66ef95ed39e15c8b26a: Status 404 returned error can't find the container with id 07b948a69d0c14e3d143c4fef06ed26e30a151a1aec2d66ef95ed39e15c8b26a Nov 24 19:30:19 crc kubenswrapper[4812]: I1124 19:30:19.624564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" event={"ID":"3d0c4f09-ce13-4072-8746-9b086bca839f","Type":"ContainerStarted","Data":"aa1a8665be3fe406ea4a93c034c9e9b28f9f9a197aeed7a6152042152b8602bb"} Nov 24 19:30:19 crc kubenswrapper[4812]: I1124 19:30:19.625991 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" event={"ID":"66596d46-48a0-4690-935e-e0144ee6923c","Type":"ContainerStarted","Data":"07b948a69d0c14e3d143c4fef06ed26e30a151a1aec2d66ef95ed39e15c8b26a"} Nov 24 19:30:19 crc kubenswrapper[4812]: I1124 19:30:19.627790 4812 generic.go:334] "Generic (PLEG): container finished" podID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerID="e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06" exitCode=0 Nov 24 19:30:19 crc kubenswrapper[4812]: I1124 19:30:19.627833 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brhd4" event={"ID":"00663509-e1ad-4c0a-98f8-1593f49f3af4","Type":"ContainerDied","Data":"e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06"} Nov 24 19:30:19 crc kubenswrapper[4812]: I1124 19:30:19.627860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brhd4" event={"ID":"00663509-e1ad-4c0a-98f8-1593f49f3af4","Type":"ContainerStarted","Data":"bc6a05f93c0d34c83e94757b4063d5ea00f3e569d11d1b4e55dd2f1482db3b16"} Nov 24 19:30:20 crc kubenswrapper[4812]: I1124 19:30:20.638633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brhd4" event={"ID":"00663509-e1ad-4c0a-98f8-1593f49f3af4","Type":"ContainerStarted","Data":"31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f"} Nov 24 19:30:21 crc kubenswrapper[4812]: I1124 19:30:21.651317 4812 generic.go:334] "Generic (PLEG): container finished" podID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerID="31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f" exitCode=0 Nov 24 19:30:21 crc kubenswrapper[4812]: I1124 19:30:21.651389 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brhd4" event={"ID":"00663509-e1ad-4c0a-98f8-1593f49f3af4","Type":"ContainerDied","Data":"31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f"} Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.678781 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" 
event={"ID":"3d0c4f09-ce13-4072-8746-9b086bca839f","Type":"ContainerStarted","Data":"8c0c6c21c7c97997b829912266c45605bbb49508625b7478cf38719a643352c1"} Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.679118 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.681062 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" event={"ID":"66596d46-48a0-4690-935e-e0144ee6923c","Type":"ContainerStarted","Data":"85ddc1b9645729dc38079eb569e1d719d40d228ba3561ce0172c9602eb018d9a"} Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.681158 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.684324 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brhd4" event={"ID":"00663509-e1ad-4c0a-98f8-1593f49f3af4","Type":"ContainerStarted","Data":"eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1"} Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.706199 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" podStartSLOduration=1.744128129 podStartE2EDuration="6.70618001s" podCreationTimestamp="2025-11-24 19:30:18 +0000 UTC" firstStartedPulling="2025-11-24 19:30:18.879562278 +0000 UTC m=+812.668514649" lastFinishedPulling="2025-11-24 19:30:23.841614149 +0000 UTC m=+817.630566530" observedRunningTime="2025-11-24 19:30:24.703870905 +0000 UTC m=+818.492823286" watchObservedRunningTime="2025-11-24 19:30:24.70618001 +0000 UTC m=+818.495132381" Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.737897 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" podStartSLOduration=2.096196945 podStartE2EDuration="6.737868976s" podCreationTimestamp="2025-11-24 19:30:18 +0000 UTC" firstStartedPulling="2025-11-24 19:30:19.230450361 +0000 UTC m=+813.019402742" lastFinishedPulling="2025-11-24 19:30:23.872122402 +0000 UTC m=+817.661074773" observedRunningTime="2025-11-24 19:30:24.73445864 +0000 UTC m=+818.523411011" watchObservedRunningTime="2025-11-24 19:30:24.737868976 +0000 UTC m=+818.526821367" Nov 24 19:30:24 crc kubenswrapper[4812]: I1124 19:30:24.756653 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-brhd4" podStartSLOduration=3.547855498 podStartE2EDuration="7.756634637s" podCreationTimestamp="2025-11-24 19:30:17 +0000 UTC" firstStartedPulling="2025-11-24 19:30:19.629452204 +0000 UTC m=+813.418404575" lastFinishedPulling="2025-11-24 19:30:23.838231343 +0000 UTC m=+817.627183714" observedRunningTime="2025-11-24 19:30:24.752947303 +0000 UTC m=+818.541899714" watchObservedRunningTime="2025-11-24 19:30:24.756634637 +0000 UTC m=+818.545587018" Nov 24 19:30:28 crc kubenswrapper[4812]: I1124 19:30:28.342322 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:28 crc kubenswrapper[4812]: I1124 19:30:28.342678 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:28 crc 
kubenswrapper[4812]: I1124 19:30:28.414912 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:32 crc kubenswrapper[4812]: I1124 19:30:32.998786 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:30:33 crc kubenswrapper[4812]: I1124 19:30:32.999545 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:30:38 crc kubenswrapper[4812]: I1124 19:30:38.406900 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:38 crc kubenswrapper[4812]: I1124 19:30:38.910856 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-84b697fbfb-749d6" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.151276 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brhd4"] Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.151859 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-brhd4" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="registry-server" containerID="cri-o://eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1" gracePeriod=2 Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.556141 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.585369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4bmq\" (UniqueName: \"kubernetes.io/projected/00663509-e1ad-4c0a-98f8-1593f49f3af4-kube-api-access-s4bmq\") pod \"00663509-e1ad-4c0a-98f8-1593f49f3af4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.585429 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-catalog-content\") pod \"00663509-e1ad-4c0a-98f8-1593f49f3af4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.589312 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-utilities\") pod \"00663509-e1ad-4c0a-98f8-1593f49f3af4\" (UID: \"00663509-e1ad-4c0a-98f8-1593f49f3af4\") " Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.595723 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-utilities" (OuterVolumeSpecName: "utilities") pod "00663509-e1ad-4c0a-98f8-1593f49f3af4" (UID: "00663509-e1ad-4c0a-98f8-1593f49f3af4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.597560 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00663509-e1ad-4c0a-98f8-1593f49f3af4-kube-api-access-s4bmq" (OuterVolumeSpecName: "kube-api-access-s4bmq") pod "00663509-e1ad-4c0a-98f8-1593f49f3af4" (UID: "00663509-e1ad-4c0a-98f8-1593f49f3af4"). InnerVolumeSpecName "kube-api-access-s4bmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.633262 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00663509-e1ad-4c0a-98f8-1593f49f3af4" (UID: "00663509-e1ad-4c0a-98f8-1593f49f3af4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.690528 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4bmq\" (UniqueName: \"kubernetes.io/projected/00663509-e1ad-4c0a-98f8-1593f49f3af4-kube-api-access-s4bmq\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.690558 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.690571 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00663509-e1ad-4c0a-98f8-1593f49f3af4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.798886 4812 generic.go:334] "Generic (PLEG): container finished" podID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerID="eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1" exitCode=0 Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.798926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brhd4" event={"ID":"00663509-e1ad-4c0a-98f8-1593f49f3af4","Type":"ContainerDied","Data":"eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1"} Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.798954 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-brhd4" event={"ID":"00663509-e1ad-4c0a-98f8-1593f49f3af4","Type":"ContainerDied","Data":"bc6a05f93c0d34c83e94757b4063d5ea00f3e569d11d1b4e55dd2f1482db3b16"} Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.798972 4812 scope.go:117] "RemoveContainer" containerID="eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.799081 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-brhd4" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.828738 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-brhd4"] Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.831737 4812 scope.go:117] "RemoveContainer" containerID="31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.833986 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-brhd4"] Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.845509 4812 scope.go:117] "RemoveContainer" containerID="e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.861076 4812 scope.go:117] "RemoveContainer" containerID="eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1" Nov 24 19:30:41 crc kubenswrapper[4812]: E1124 19:30:41.861395 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1\": container with ID starting with eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1 not found: ID does not exist" containerID="eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.861432 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1"} err="failed to get container status \"eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1\": rpc error: code = NotFound desc = could not find container \"eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1\": container with ID starting with eefbffe99dbcf15ab4d5884b18182785e17676a540ca843faec2195cda8f7bf1 not found: ID does not exist" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.861457 4812 scope.go:117] "RemoveContainer" containerID="31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f" Nov 24 19:30:41 crc kubenswrapper[4812]: E1124 19:30:41.861740 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f\": container with ID starting with 31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f not found: ID does not exist" containerID="31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.861763 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f"} err="failed to get container status \"31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f\": rpc error: code = NotFound desc = could not find container \"31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f\": container with ID starting with 31924093ba89965f2240737425b865bcf0f83075c1e4599b2a66330b9eeedc6f not found: ID does not exist" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.861776 4812 scope.go:117] "RemoveContainer" containerID="e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06" Nov 24 19:30:41 crc kubenswrapper[4812]: E1124 19:30:41.862010 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06\": container with ID starting with e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06 not found: ID does not exist" containerID="e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06" Nov 24 19:30:41 crc kubenswrapper[4812]: I1124 19:30:41.862029 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06"} err="failed to get container status \"e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06\": rpc error: code = NotFound desc = could not find container \"e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06\": container with ID starting with e3eb58d229af9fff35edfbc75963fc67c68a45169c5853978642599187284b06 not found: ID does not exist" Nov 24 19:30:42 crc kubenswrapper[4812]: I1124 19:30:42.977923 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" path="/var/lib/kubelet/pods/00663509-e1ad-4c0a-98f8-1593f49f3af4/volumes" Nov 24 19:30:58 crc kubenswrapper[4812]: I1124 19:30:58.560620 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-78f47c68d-wvmj4" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.342563 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm"] Nov 24 19:30:59 crc kubenswrapper[4812]: E1124 19:30:59.343208 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="registry-server" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.343234 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="registry-server" Nov 24 19:30:59 crc kubenswrapper[4812]: E1124 19:30:59.343259 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="extract-content" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.343271 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="extract-content" Nov 24 19:30:59 crc kubenswrapper[4812]: E1124 19:30:59.343289 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="extract-utilities" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.343301 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="extract-utilities" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.343521 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="00663509-e1ad-4c0a-98f8-1593f49f3af4" containerName="registry-server" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.344108 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.347636 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.348102 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-5ps9s" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.384140 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pnj5c"] Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.387161 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.387200 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm"] Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.389998 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.391145 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.449449 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-wx4bd"] Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.451554 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.455526 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.455803 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.456117 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.456489 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-phxlg" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.468493 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-fnrdw"] Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.471996 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.477812 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.481584 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fnrdw"] Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.530214 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965mx\" (UniqueName: \"kubernetes.io/projected/7b2ec99a-e925-44c3-b8ca-f60f2bc7c015-kube-api-access-965mx\") pod \"frr-k8s-webhook-server-6998585d5-sp8mm\" (UID: \"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.530591 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/492fe1b2-51da-4c19-a7d9-0aa2727a9989-metrics-certs\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.530612 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-conf\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.530628 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-sockets\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.530652 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-startup\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.531161 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-reloader\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.531229 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55m9p\" (UniqueName: \"kubernetes.io/projected/492fe1b2-51da-4c19-a7d9-0aa2727a9989-kube-api-access-55m9p\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.531251 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b2ec99a-e925-44c3-b8ca-f60f2bc7c015-cert\") pod \"frr-k8s-webhook-server-6998585d5-sp8mm\" (UID: \"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 
19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.531266 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-metrics\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.632898 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50657362-f6ce-4ba4-8a38-9d512e1abb86-cert\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.632971 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55m9p\" (UniqueName: \"kubernetes.io/projected/492fe1b2-51da-4c19-a7d9-0aa2727a9989-kube-api-access-55m9p\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633002 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b2ec99a-e925-44c3-b8ca-f60f2bc7c015-cert\") pod \"frr-k8s-webhook-server-6998585d5-sp8mm\" (UID: \"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633024 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-metrics\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633064 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-metrics-certs\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633090 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-965mx\" (UniqueName: \"kubernetes.io/projected/7b2ec99a-e925-44c3-b8ca-f60f2bc7c015-kube-api-access-965mx\") pod \"frr-k8s-webhook-server-6998585d5-sp8mm\" (UID: \"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/492fe1b2-51da-4c19-a7d9-0aa2727a9989-metrics-certs\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633138 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-conf\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633160 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-sockets\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjmp\" (UniqueName: \"kubernetes.io/projected/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-kube-api-access-qjjmp\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633211 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50657362-f6ce-4ba4-8a38-9d512e1abb86-metrics-certs\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633238 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-startup\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633271 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-metallb-excludel2\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633336 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633387 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-reloader\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.633423 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pb96\" (UniqueName: \"kubernetes.io/projected/50657362-f6ce-4ba4-8a38-9d512e1abb86-kube-api-access-5pb96\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.636415 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-sockets\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.636930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-reloader\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") 
" pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.636966 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-metrics\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.637192 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-conf\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.637374 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/492fe1b2-51da-4c19-a7d9-0aa2727a9989-frr-startup\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.642327 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b2ec99a-e925-44c3-b8ca-f60f2bc7c015-cert\") pod \"frr-k8s-webhook-server-6998585d5-sp8mm\" (UID: \"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.642736 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/492fe1b2-51da-4c19-a7d9-0aa2727a9989-metrics-certs\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.661086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55m9p\" (UniqueName: \"kubernetes.io/projected/492fe1b2-51da-4c19-a7d9-0aa2727a9989-kube-api-access-55m9p\") pod \"frr-k8s-pnj5c\" (UID: \"492fe1b2-51da-4c19-a7d9-0aa2727a9989\") " pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.661248 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-965mx\" (UniqueName: \"kubernetes.io/projected/7b2ec99a-e925-44c3-b8ca-f60f2bc7c015-kube-api-access-965mx\") pod \"frr-k8s-webhook-server-6998585d5-sp8mm\" (UID: \"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.711585 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.719420 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.734618 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-metrics-certs\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.734686 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjjmp\" (UniqueName: \"kubernetes.io/projected/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-kube-api-access-qjjmp\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.734715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50657362-f6ce-4ba4-8a38-9d512e1abb86-metrics-certs\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.734754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-metallb-excludel2\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.734796 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.734838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pb96\" (UniqueName: \"kubernetes.io/projected/50657362-f6ce-4ba4-8a38-9d512e1abb86-kube-api-access-5pb96\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.734868 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50657362-f6ce-4ba4-8a38-9d512e1abb86-cert\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: E1124 19:30:59.735621 4812 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 19:30:59 crc kubenswrapper[4812]: E1124 19:30:59.735684 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist podName:b2a8b1dc-073e-4abb-bb26-f6bce6e2344f nodeName:}" failed. No retries permitted until 2025-11-24 19:31:00.235667697 +0000 UTC m=+854.024620068 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist") pod "speaker-wx4bd" (UID: "b2a8b1dc-073e-4abb-bb26-f6bce6e2344f") : secret "metallb-memberlist" not found Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.736011 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-metallb-excludel2\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.737939 4812 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.738752 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50657362-f6ce-4ba4-8a38-9d512e1abb86-metrics-certs\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.739805 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-metrics-certs\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.748598 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/50657362-f6ce-4ba4-8a38-9d512e1abb86-cert\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.751105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjjmp\" (UniqueName: \"kubernetes.io/projected/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-kube-api-access-qjjmp\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.757447 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pb96\" (UniqueName: \"kubernetes.io/projected/50657362-f6ce-4ba4-8a38-9d512e1abb86-kube-api-access-5pb96\") pod \"controller-6c7b4b5f48-fnrdw\" (UID: \"50657362-f6ce-4ba4-8a38-9d512e1abb86\") " pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.793627 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:30:59 crc kubenswrapper[4812]: I1124 19:30:59.929752 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerStarted","Data":"d24632017ffb70ae3a53506285108ef27b9fe9a19e96d5a11886c19c45a46aa4"} Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.157609 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm"] Nov 24 19:31:00 crc kubenswrapper[4812]: W1124 19:31:00.168456 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b2ec99a_e925_44c3_b8ca_f60f2bc7c015.slice/crio-70610811df4317911c4b50250b86d0838e25626a6a45ad152d3526040f9321fb WatchSource:0}: Error finding container 70610811df4317911c4b50250b86d0838e25626a6a45ad152d3526040f9321fb: Status 404 returned error can't find the container with id 70610811df4317911c4b50250b86d0838e25626a6a45ad152d3526040f9321fb Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.243620 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:31:00 crc kubenswrapper[4812]: E1124 19:31:00.243928 4812 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 24 19:31:00 crc kubenswrapper[4812]: E1124 19:31:00.244064 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist podName:b2a8b1dc-073e-4abb-bb26-f6bce6e2344f nodeName:}" failed. No retries permitted until 2025-11-24 19:31:01.244026925 +0000 UTC m=+855.032979326 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist") pod "speaker-wx4bd" (UID: "b2a8b1dc-073e-4abb-bb26-f6bce6e2344f") : secret "metallb-memberlist" not found Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.282121 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fnrdw"] Nov 24 19:31:00 crc kubenswrapper[4812]: W1124 19:31:00.290009 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50657362_f6ce_4ba4_8a38_9d512e1abb86.slice/crio-f0c772cac84684362e4b61e9b99aef9835fbb4c0fefdecb2d610855c474c10e2 WatchSource:0}: Error finding container f0c772cac84684362e4b61e9b99aef9835fbb4c0fefdecb2d610855c474c10e2: Status 404 returned error can't find the container with id f0c772cac84684362e4b61e9b99aef9835fbb4c0fefdecb2d610855c474c10e2 Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.949039 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fnrdw" event={"ID":"50657362-f6ce-4ba4-8a38-9d512e1abb86","Type":"ContainerStarted","Data":"2407eb399204b602b102adf819934e055a416f7b41a84c895691b2913377f0eb"} Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.949406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fnrdw" event={"ID":"50657362-f6ce-4ba4-8a38-9d512e1abb86","Type":"ContainerStarted","Data":"f4aa778f28a7d940c6504effbeea2ee6b172eb0164c2381dc082d1cc84aafabf"} Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.949437 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.949456 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fnrdw" event={"ID":"50657362-f6ce-4ba4-8a38-9d512e1abb86","Type":"ContainerStarted","Data":"f0c772cac84684362e4b61e9b99aef9835fbb4c0fefdecb2d610855c474c10e2"} Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.952990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" event={"ID":"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015","Type":"ContainerStarted","Data":"70610811df4317911c4b50250b86d0838e25626a6a45ad152d3526040f9321fb"} Nov 24 19:31:00 crc kubenswrapper[4812]: I1124 19:31:00.977113 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-fnrdw" podStartSLOduration=1.977083104 podStartE2EDuration="1.977083104s" podCreationTimestamp="2025-11-24 19:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:31:00.976515818 +0000 UTC m=+854.765468219" watchObservedRunningTime="2025-11-24 19:31:00.977083104 +0000 UTC m=+854.766035535" Nov 24 19:31:01 crc kubenswrapper[4812]: I1124 19:31:01.258204 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:31:01 crc kubenswrapper[4812]: I1124 19:31:01.267734 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/b2a8b1dc-073e-4abb-bb26-f6bce6e2344f-memberlist\") pod \"speaker-wx4bd\" (UID: \"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f\") " pod="metallb-system/speaker-wx4bd" Nov 24 19:31:01 crc kubenswrapper[4812]: I1124 19:31:01.271420 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-wx4bd" Nov 24 19:31:01 crc kubenswrapper[4812]: W1124 19:31:01.301802 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2a8b1dc_073e_4abb_bb26_f6bce6e2344f.slice/crio-da8cc71e7a1efa14cbf7c5f471bf765e836d6cd5444390d05dbd35b98ad2211c WatchSource:0}: Error finding container da8cc71e7a1efa14cbf7c5f471bf765e836d6cd5444390d05dbd35b98ad2211c: Status 404 returned error can't find the container with id da8cc71e7a1efa14cbf7c5f471bf765e836d6cd5444390d05dbd35b98ad2211c Nov 24 19:31:01 crc kubenswrapper[4812]: I1124 19:31:01.971033 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wx4bd" event={"ID":"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f","Type":"ContainerStarted","Data":"7e86e81150ef72821c89a223d45439d07ed4723180627f1f3b3d10fa2d5da4b5"} Nov 24 19:31:01 crc kubenswrapper[4812]: I1124 19:31:01.971078 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wx4bd" event={"ID":"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f","Type":"ContainerStarted","Data":"619c61861569e8db75dd2f84c1618c6d319e192776b45116f651e311dafeb157"} Nov 24 19:31:01 crc kubenswrapper[4812]: I1124 19:31:01.971091 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-wx4bd" event={"ID":"b2a8b1dc-073e-4abb-bb26-f6bce6e2344f","Type":"ContainerStarted","Data":"da8cc71e7a1efa14cbf7c5f471bf765e836d6cd5444390d05dbd35b98ad2211c"} Nov 24 19:31:01 crc kubenswrapper[4812]: I1124 19:31:01.971677 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-wx4bd" Nov 24 19:31:02 crc kubenswrapper[4812]: I1124 19:31:02.998634 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:31:02 crc kubenswrapper[4812]: I1124 19:31:02.999197 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:31:02 crc kubenswrapper[4812]: I1124 19:31:02.999268 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:31:03 crc kubenswrapper[4812]: I1124 19:31:03.000136 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90a8aabf222225c1c2c6e28ca52be1f8419a4b406dd9e47fff7f3d92641c4332"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:31:03 crc kubenswrapper[4812]: I1124 19:31:03.000215 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://90a8aabf222225c1c2c6e28ca52be1f8419a4b406dd9e47fff7f3d92641c4332" gracePeriod=600 Nov 24 19:31:03 crc kubenswrapper[4812]: I1124 19:31:03.985403 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="90a8aabf222225c1c2c6e28ca52be1f8419a4b406dd9e47fff7f3d92641c4332" exitCode=0 Nov 24 19:31:03 crc kubenswrapper[4812]: I1124 19:31:03.985765 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"90a8aabf222225c1c2c6e28ca52be1f8419a4b406dd9e47fff7f3d92641c4332"} Nov 24 19:31:03 crc kubenswrapper[4812]: I1124 19:31:03.985819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"d0e66c2294fc71bbe8db0b484c7769b46468b08b26476c3c5d5d976af6fcf62b"} Nov 24 19:31:03 crc kubenswrapper[4812]: I1124 19:31:03.985840 4812 scope.go:117] "RemoveContainer" containerID="507ee7ea38a17f74128285fb7607b35f01e6c8bdad11e77d054df56b321b0fc0" Nov 24 19:31:04 crc kubenswrapper[4812]: I1124 19:31:04.002480 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-wx4bd" podStartSLOduration=5.002463675 podStartE2EDuration="5.002463675s" podCreationTimestamp="2025-11-24 19:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:31:01.988778607 +0000 UTC m=+855.777730978" watchObservedRunningTime="2025-11-24 19:31:04.002463675 +0000 UTC m=+857.791416046" Nov 24 19:31:08 crc kubenswrapper[4812]: I1124 19:31:08.015824 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" event={"ID":"7b2ec99a-e925-44c3-b8ca-f60f2bc7c015","Type":"ContainerStarted","Data":"b2a21d2a76809f838ec5eb3f9beccad4f57a8da6bc72685bafa2e9d3b4f8ddc4"} Nov 24 19:31:08 crc kubenswrapper[4812]: I1124 19:31:08.016711 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:31:08 crc kubenswrapper[4812]: I1124 19:31:08.023707 4812 generic.go:334] "Generic (PLEG): container finished" podID="492fe1b2-51da-4c19-a7d9-0aa2727a9989" containerID="c78540065059ea5cf0e2016d1f84e28b53e4bc21773dea024e0f4a0882e06d48" exitCode=0 Nov 24 19:31:08 crc kubenswrapper[4812]: I1124 19:31:08.023782 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerDied","Data":"c78540065059ea5cf0e2016d1f84e28b53e4bc21773dea024e0f4a0882e06d48"} Nov 24 19:31:08 crc kubenswrapper[4812]: I1124 19:31:08.043916 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" podStartSLOduration=2.009407071 podStartE2EDuration="9.043891073s" podCreationTimestamp="2025-11-24 19:30:59 +0000 UTC" firstStartedPulling="2025-11-24 19:31:00.17169256 +0000 UTC m=+853.960644931" lastFinishedPulling="2025-11-24 19:31:07.206176542 +0000 UTC m=+860.995128933" observedRunningTime="2025-11-24 19:31:08.039823478 +0000 UTC m=+861.828775889" 
watchObservedRunningTime="2025-11-24 19:31:08.043891073 +0000 UTC m=+861.832843484" Nov 24 19:31:09 crc kubenswrapper[4812]: I1124 19:31:09.033517 4812 generic.go:334] "Generic (PLEG): container finished" podID="492fe1b2-51da-4c19-a7d9-0aa2727a9989" containerID="60916bc081c5ddfc8a37eb44f67346445620391c1cf3e33d239471567bcc0b76" exitCode=0 Nov 24 19:31:09 crc kubenswrapper[4812]: I1124 19:31:09.033605 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerDied","Data":"60916bc081c5ddfc8a37eb44f67346445620391c1cf3e33d239471567bcc0b76"} Nov 24 19:31:10 crc kubenswrapper[4812]: I1124 19:31:10.046581 4812 generic.go:334] "Generic (PLEG): container finished" podID="492fe1b2-51da-4c19-a7d9-0aa2727a9989" containerID="762498749bf1431526f6aebd05ad9a1f61e47ea04b96577efc745716cbb9dd09" exitCode=0 Nov 24 19:31:10 crc kubenswrapper[4812]: I1124 19:31:10.048270 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerDied","Data":"762498749bf1431526f6aebd05ad9a1f61e47ea04b96577efc745716cbb9dd09"} Nov 24 19:31:11 crc kubenswrapper[4812]: I1124 19:31:11.058793 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerStarted","Data":"dbefb2dc41ea3ac648b73698753af243b47cba5c58b9668121b54fc8160a29b0"} Nov 24 19:31:11 crc kubenswrapper[4812]: I1124 19:31:11.059075 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerStarted","Data":"bf98a83efccdb39caf36aa3117d73a40e0c603504463113d13b4edf8e0117628"} Nov 24 19:31:11 crc kubenswrapper[4812]: I1124 19:31:11.059088 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerStarted","Data":"2df8e9ec2099789730703edb8f453041ced4abec4c83cffc6431b09d75903417"} Nov 24 19:31:11 crc kubenswrapper[4812]: I1124 19:31:11.059097 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerStarted","Data":"f058173ce974ad409ae864703b78c9941ad252d25ddebda2a790b7ea06577851"} Nov 24 19:31:11 crc kubenswrapper[4812]: I1124 19:31:11.059105 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerStarted","Data":"aeaf3c44aa1c99aa55237aa6916e05fee06edf1369cafcf7d8d8cb72b2414e89"} Nov 24 19:31:11 crc kubenswrapper[4812]: I1124 19:31:11.275735 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-wx4bd" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.069026 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pnj5c" event={"ID":"492fe1b2-51da-4c19-a7d9-0aa2727a9989","Type":"ContainerStarted","Data":"7ec64d74fcdf3ae479d80874fc8b7da2892a87fb82dd05db82c7e8b352580440"} Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.070360 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.092160 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pnj5c" 
podStartSLOduration=5.744244092 podStartE2EDuration="13.092141625s" podCreationTimestamp="2025-11-24 19:30:59 +0000 UTC" firstStartedPulling="2025-11-24 19:30:59.878265877 +0000 UTC m=+853.667218268" lastFinishedPulling="2025-11-24 19:31:07.22616341 +0000 UTC m=+861.015115801" observedRunningTime="2025-11-24 19:31:12.089437488 +0000 UTC m=+865.878389889" watchObservedRunningTime="2025-11-24 19:31:12.092141625 +0000 UTC m=+865.881093996" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.668154 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb"] Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.670456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.673476 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.680072 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb"] Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.750885 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwh4g\" (UniqueName: \"kubernetes.io/projected/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-kube-api-access-rwh4g\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.750988 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.751102 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.854075 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.854216 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwh4g\" (UniqueName: \"kubernetes.io/projected/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-kube-api-access-rwh4g\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: 
\"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.854278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.854909 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.855063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.880057 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwh4g\" (UniqueName: \"kubernetes.io/projected/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-kube-api-access-rwh4g\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:12 crc kubenswrapper[4812]: I1124 19:31:12.996172 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:13 crc kubenswrapper[4812]: I1124 19:31:13.522970 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb"] Nov 24 19:31:14 crc kubenswrapper[4812]: I1124 19:31:14.090455 4812 generic.go:334] "Generic (PLEG): container finished" podID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerID="e64d65dbbf56056e1a2d58bf166f78e6800a47917c794c71032d6a4b4291feec" exitCode=0 Nov 24 19:31:14 crc kubenswrapper[4812]: I1124 19:31:14.090568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" event={"ID":"6364088b-2467-4d1d-b6e9-b29ec6ea75a5","Type":"ContainerDied","Data":"e64d65dbbf56056e1a2d58bf166f78e6800a47917c794c71032d6a4b4291feec"} Nov 24 19:31:14 crc kubenswrapper[4812]: I1124 19:31:14.090851 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" event={"ID":"6364088b-2467-4d1d-b6e9-b29ec6ea75a5","Type":"ContainerStarted","Data":"2cec8d5d1c61ce3b1b999b6f660d29dce029fee90ecf26a107a369a2b4b5dff8"} Nov 24 19:31:14 crc kubenswrapper[4812]: I1124 19:31:14.720222 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:31:14 crc kubenswrapper[4812]: I1124 19:31:14.758709 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:31:18 crc kubenswrapper[4812]: I1124 19:31:18.119027 4812 generic.go:334] "Generic (PLEG): container finished" podID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerID="8d998dcc9a119c4bb25e817ea44b72e71b139bc94e868a5b7209ea9e1737bfef" exitCode=0 Nov 24 19:31:18 crc kubenswrapper[4812]: I1124 19:31:18.119148 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" event={"ID":"6364088b-2467-4d1d-b6e9-b29ec6ea75a5","Type":"ContainerDied","Data":"8d998dcc9a119c4bb25e817ea44b72e71b139bc94e868a5b7209ea9e1737bfef"} Nov 24 19:31:19 crc kubenswrapper[4812]: I1124 19:31:19.127824 4812 generic.go:334] "Generic (PLEG): container finished" podID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerID="c45c9b8712e2c3d1991ab2f18954192d470e2e3e0c646cbf26c913e10e09f9cb" exitCode=0 Nov 24 19:31:19 crc kubenswrapper[4812]: I1124 19:31:19.127955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" event={"ID":"6364088b-2467-4d1d-b6e9-b29ec6ea75a5","Type":"ContainerDied","Data":"c45c9b8712e2c3d1991ab2f18954192d470e2e3e0c646cbf26c913e10e09f9cb"} Nov 24 19:31:19 crc kubenswrapper[4812]: I1124 19:31:19.718582 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-sp8mm" Nov 24 19:31:19 crc kubenswrapper[4812]: I1124 19:31:19.797559 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-fnrdw" Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.556055 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.576440 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-bundle\") pod \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.576513 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwh4g\" (UniqueName: \"kubernetes.io/projected/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-kube-api-access-rwh4g\") pod \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.576559 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-util\") pod \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\" (UID: \"6364088b-2467-4d1d-b6e9-b29ec6ea75a5\") " Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.577899 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-bundle" (OuterVolumeSpecName: "bundle") pod "6364088b-2467-4d1d-b6e9-b29ec6ea75a5" (UID: "6364088b-2467-4d1d-b6e9-b29ec6ea75a5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.585667 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-kube-api-access-rwh4g" (OuterVolumeSpecName: "kube-api-access-rwh4g") pod "6364088b-2467-4d1d-b6e9-b29ec6ea75a5" (UID: "6364088b-2467-4d1d-b6e9-b29ec6ea75a5"). InnerVolumeSpecName "kube-api-access-rwh4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.595435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-util" (OuterVolumeSpecName: "util") pod "6364088b-2467-4d1d-b6e9-b29ec6ea75a5" (UID: "6364088b-2467-4d1d-b6e9-b29ec6ea75a5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.678867 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-util\") on node \"crc\" DevicePath \"\"" Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.678910 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:31:20 crc kubenswrapper[4812]: I1124 19:31:20.678923 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwh4g\" (UniqueName: \"kubernetes.io/projected/6364088b-2467-4d1d-b6e9-b29ec6ea75a5-kube-api-access-rwh4g\") on node \"crc\" DevicePath \"\"" Nov 24 19:31:21 crc kubenswrapper[4812]: I1124 19:31:21.145903 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" event={"ID":"6364088b-2467-4d1d-b6e9-b29ec6ea75a5","Type":"ContainerDied","Data":"2cec8d5d1c61ce3b1b999b6f660d29dce029fee90ecf26a107a369a2b4b5dff8"} Nov 24 19:31:21 crc kubenswrapper[4812]: I1124 19:31:21.145948 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cec8d5d1c61ce3b1b999b6f660d29dce029fee90ecf26a107a369a2b4b5dff8" Nov 24 19:31:21 crc kubenswrapper[4812]: I1124 19:31:21.145975 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.854997 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl"] Nov 24 19:31:26 crc kubenswrapper[4812]: E1124 19:31:26.855804 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerName="pull" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.855823 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerName="pull" Nov 24 19:31:26 crc kubenswrapper[4812]: E1124 19:31:26.855869 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerName="extract" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.855881 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerName="extract" Nov 24 19:31:26 crc kubenswrapper[4812]: E1124 19:31:26.855901 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerName="util" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.855912 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerName="util" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.856075 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6364088b-2467-4d1d-b6e9-b29ec6ea75a5" containerName="extract" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.856611 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.858866 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.858989 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.859184 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-jv6q8" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.903548 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl"] Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.967194 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7m8pl\" (UID: \"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:26 crc kubenswrapper[4812]: I1124 19:31:26.967291 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp7p4\" (UniqueName: \"kubernetes.io/projected/8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50-kube-api-access-lp7p4\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7m8pl\" (UID: \"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:27 crc kubenswrapper[4812]: I1124 19:31:27.069056 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp7p4\" (UniqueName: \"kubernetes.io/projected/8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50-kube-api-access-lp7p4\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7m8pl\" (UID: \"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:27 crc kubenswrapper[4812]: I1124 19:31:27.069192 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7m8pl\" (UID: \"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:27 crc kubenswrapper[4812]: I1124 19:31:27.069994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7m8pl\" (UID: \"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:27 crc kubenswrapper[4812]: I1124 19:31:27.099148 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp7p4\" (UniqueName: \"kubernetes.io/projected/8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50-kube-api-access-lp7p4\") pod \"cert-manager-operator-controller-manager-64cf6dff88-7m8pl\" (UID: \"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:27 crc kubenswrapper[4812]: I1124 19:31:27.176645 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" Nov 24 19:31:27 crc kubenswrapper[4812]: I1124 19:31:27.516298 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl"] Nov 24 19:31:28 crc kubenswrapper[4812]: I1124 19:31:28.192439 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" event={"ID":"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50","Type":"ContainerStarted","Data":"7b32df7ffeed4b26a8bcb781f9baa7d5102e957e494cd0852ba99f88e96799f7"} Nov 24 19:31:29 crc kubenswrapper[4812]: I1124 19:31:29.737908 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pnj5c" Nov 24 19:31:35 crc kubenswrapper[4812]: I1124 19:31:35.245425 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" event={"ID":"8d3f8786-7ba8-48ce-b1b6-90e36e2f0b50","Type":"ContainerStarted","Data":"7958d3ace68047d672d000d1c05be22f8c1a313b441ac0b60503638c2cb9bd42"} Nov 24 19:31:35 crc kubenswrapper[4812]: I1124 19:31:35.289693 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-7m8pl" podStartSLOduration=2.151727213 podStartE2EDuration="9.289675334s" podCreationTimestamp="2025-11-24 19:31:26 +0000 UTC" firstStartedPulling="2025-11-24 19:31:27.536358737 +0000 UTC m=+881.325311108" lastFinishedPulling="2025-11-24 19:31:34.674306818 +0000 UTC m=+888.463259229" observedRunningTime="2025-11-24 19:31:35.286171494 +0000 UTC m=+889.075123885" watchObservedRunningTime="2025-11-24 19:31:35.289675334 +0000 UTC m=+889.078627715" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.495099 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-kjhvq"] Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.496439 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.505095 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-l6cqs" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.505134 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.506074 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.510680 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-kjhvq"] Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.565585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2czjj\" (UniqueName: \"kubernetes.io/projected/47ba0344-b57c-47c2-917f-52f94e072562-kube-api-access-2czjj\") pod \"cert-manager-webhook-f4fb5df64-kjhvq\" (UID: \"47ba0344-b57c-47c2-917f-52f94e072562\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.565768 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47ba0344-b57c-47c2-917f-52f94e072562-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-kjhvq\" (UID: \"47ba0344-b57c-47c2-917f-52f94e072562\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.667354 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2czjj\" (UniqueName: \"kubernetes.io/projected/47ba0344-b57c-47c2-917f-52f94e072562-kube-api-access-2czjj\") pod \"cert-manager-webhook-f4fb5df64-kjhvq\" (UID: \"47ba0344-b57c-47c2-917f-52f94e072562\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.667435 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47ba0344-b57c-47c2-917f-52f94e072562-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-kjhvq\" (UID: \"47ba0344-b57c-47c2-917f-52f94e072562\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.696417 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47ba0344-b57c-47c2-917f-52f94e072562-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-kjhvq\" (UID: \"47ba0344-b57c-47c2-917f-52f94e072562\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.696904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2czjj\" (UniqueName: \"kubernetes.io/projected/47ba0344-b57c-47c2-917f-52f94e072562-kube-api-access-2czjj\") pod \"cert-manager-webhook-f4fb5df64-kjhvq\" (UID: \"47ba0344-b57c-47c2-917f-52f94e072562\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:38 crc kubenswrapper[4812]: I1124 19:31:38.814838 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:39 crc kubenswrapper[4812]: I1124 19:31:39.297527 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-kjhvq"] Nov 24 19:31:40 crc kubenswrapper[4812]: I1124 19:31:40.279177 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" event={"ID":"47ba0344-b57c-47c2-917f-52f94e072562","Type":"ContainerStarted","Data":"1aa804294ccfbd4bcf78d0d2d87bba339d3e463015fd548ffe561605f471fb7b"} Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.015049 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg"] Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.016735 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.018838 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qjf5s" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.030457 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg"] Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.117260 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/999f6f49-032a-4641-9fc4-b0d7b9094d87-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qqxzg\" (UID: \"999f6f49-032a-4641-9fc4-b0d7b9094d87\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.117325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqcwv\" (UniqueName: \"kubernetes.io/projected/999f6f49-032a-4641-9fc4-b0d7b9094d87-kube-api-access-xqcwv\") pod \"cert-manager-cainjector-855d9ccff4-qqxzg\" (UID: \"999f6f49-032a-4641-9fc4-b0d7b9094d87\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.218684 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqcwv\" (UniqueName: \"kubernetes.io/projected/999f6f49-032a-4641-9fc4-b0d7b9094d87-kube-api-access-xqcwv\") pod \"cert-manager-cainjector-855d9ccff4-qqxzg\" (UID: \"999f6f49-032a-4641-9fc4-b0d7b9094d87\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.218808 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/999f6f49-032a-4641-9fc4-b0d7b9094d87-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qqxzg\" (UID: \"999f6f49-032a-4641-9fc4-b0d7b9094d87\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.240517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqcwv\" (UniqueName: \"kubernetes.io/projected/999f6f49-032a-4641-9fc4-b0d7b9094d87-kube-api-access-xqcwv\") pod \"cert-manager-cainjector-855d9ccff4-qqxzg\" (UID: \"999f6f49-032a-4641-9fc4-b0d7b9094d87\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.249776 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/999f6f49-032a-4641-9fc4-b0d7b9094d87-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qqxzg\" (UID: \"999f6f49-032a-4641-9fc4-b0d7b9094d87\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.343251 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" Nov 24 19:31:41 crc kubenswrapper[4812]: I1124 19:31:41.681254 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg"] Nov 24 19:31:42 crc kubenswrapper[4812]: I1124 19:31:42.321322 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" event={"ID":"999f6f49-032a-4641-9fc4-b0d7b9094d87","Type":"ContainerStarted","Data":"ace4fb9c825743eec53b6c2a91e552b678921d6fb5db3953c1f27213ff2f07d2"} Nov 24 19:31:47 crc kubenswrapper[4812]: I1124 19:31:47.352907 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" event={"ID":"47ba0344-b57c-47c2-917f-52f94e072562","Type":"ContainerStarted","Data":"9bf8af8df0ec00fabfb3a0af08d48a125956d88ec1d346e757a64594cc7d0237"} Nov 24 19:31:47 crc kubenswrapper[4812]: I1124 19:31:47.353694 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:47 crc kubenswrapper[4812]: I1124 19:31:47.356505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" event={"ID":"999f6f49-032a-4641-9fc4-b0d7b9094d87","Type":"ContainerStarted","Data":"280744a124f66ec8c2bf49f0bc94269a4fdeb4c0dc9930abe59be6bcc0b22593"} Nov 24 19:31:47 crc kubenswrapper[4812]: I1124 19:31:47.379065 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" podStartSLOduration=1.646748337 podStartE2EDuration="9.379032956s" podCreationTimestamp="2025-11-24 19:31:38 +0000 UTC" firstStartedPulling="2025-11-24 19:31:39.305922957 +0000 UTC m=+893.094875338" lastFinishedPulling="2025-11-24 19:31:47.038207596 +0000 UTC m=+900.827159957" observedRunningTime="2025-11-24 19:31:47.370874824 +0000 UTC m=+901.159827205" watchObservedRunningTime="2025-11-24 19:31:47.379032956 +0000 UTC m=+901.167985357" Nov 24 19:31:47 crc kubenswrapper[4812]: I1124 19:31:47.395488 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qqxzg" podStartSLOduration=2.0945022030000002 podStartE2EDuration="7.395459432s" podCreationTimestamp="2025-11-24 19:31:40 +0000 UTC" firstStartedPulling="2025-11-24 19:31:41.694558835 +0000 UTC m=+895.483511216" lastFinishedPulling="2025-11-24 19:31:46.995516074 +0000 UTC m=+900.784468445" observedRunningTime="2025-11-24 19:31:47.388803953 +0000 UTC m=+901.177756364" watchObservedRunningTime="2025-11-24 19:31:47.395459432 +0000 UTC m=+901.184411833" Nov 24 19:31:53 crc kubenswrapper[4812]: I1124 19:31:53.819653 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-kjhvq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.378399 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-d2zlq"] Nov 24 19:31:56 crc 
kubenswrapper[4812]: I1124 19:31:56.379822 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.382137 4812 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-x92ct" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.393493 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-d2zlq"] Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.488729 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8637886d-1ab8-45c9-9b90-7d94a6820292-bound-sa-token\") pod \"cert-manager-86cb77c54b-d2zlq\" (UID: \"8637886d-1ab8-45c9-9b90-7d94a6820292\") " pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.488805 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wj5\" (UniqueName: \"kubernetes.io/projected/8637886d-1ab8-45c9-9b90-7d94a6820292-kube-api-access-72wj5\") pod \"cert-manager-86cb77c54b-d2zlq\" (UID: \"8637886d-1ab8-45c9-9b90-7d94a6820292\") " pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.590637 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8637886d-1ab8-45c9-9b90-7d94a6820292-bound-sa-token\") pod \"cert-manager-86cb77c54b-d2zlq\" (UID: \"8637886d-1ab8-45c9-9b90-7d94a6820292\") " pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.590738 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72wj5\" (UniqueName: \"kubernetes.io/projected/8637886d-1ab8-45c9-9b90-7d94a6820292-kube-api-access-72wj5\") pod \"cert-manager-86cb77c54b-d2zlq\" (UID: \"8637886d-1ab8-45c9-9b90-7d94a6820292\") " pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.625925 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72wj5\" (UniqueName: \"kubernetes.io/projected/8637886d-1ab8-45c9-9b90-7d94a6820292-kube-api-access-72wj5\") pod \"cert-manager-86cb77c54b-d2zlq\" (UID: \"8637886d-1ab8-45c9-9b90-7d94a6820292\") " pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.626062 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8637886d-1ab8-45c9-9b90-7d94a6820292-bound-sa-token\") pod \"cert-manager-86cb77c54b-d2zlq\" (UID: \"8637886d-1ab8-45c9-9b90-7d94a6820292\") " pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:56 crc kubenswrapper[4812]: I1124 19:31:56.715189 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-d2zlq" Nov 24 19:31:57 crc kubenswrapper[4812]: I1124 19:31:57.221930 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-d2zlq"] Nov 24 19:31:57 crc kubenswrapper[4812]: W1124 19:31:57.233787 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8637886d_1ab8_45c9_9b90_7d94a6820292.slice/crio-d350d18aa400421f4613692e24da04c4502d92ae439ab2610ed662fc1aa75de8 WatchSource:0}: Error finding container d350d18aa400421f4613692e24da04c4502d92ae439ab2610ed662fc1aa75de8: Status 404 returned error can't find the container with id d350d18aa400421f4613692e24da04c4502d92ae439ab2610ed662fc1aa75de8 Nov 24 19:31:57 crc kubenswrapper[4812]: I1124 19:31:57.431406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-d2zlq" event={"ID":"8637886d-1ab8-45c9-9b90-7d94a6820292","Type":"ContainerStarted","Data":"d350d18aa400421f4613692e24da04c4502d92ae439ab2610ed662fc1aa75de8"} Nov 24 19:31:58 crc kubenswrapper[4812]: I1124 19:31:58.440548 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-d2zlq" event={"ID":"8637886d-1ab8-45c9-9b90-7d94a6820292","Type":"ContainerStarted","Data":"ab8afa5bd13d7acbd5537a1616f8fe586de1f1eb666c39f996ef5e6f2433cd8b"} Nov 24 19:31:58 crc kubenswrapper[4812]: I1124 19:31:58.467534 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-d2zlq" podStartSLOduration=2.467496609 podStartE2EDuration="2.467496609s" podCreationTimestamp="2025-11-24 19:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:31:58.457940348 +0000 UTC m=+912.246892759" watchObservedRunningTime="2025-11-24 19:31:58.467496609 +0000 UTC m=+912.256449020" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.271594 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n5x8r"] Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.274473 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n5x8r" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.296822 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.297142 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.297383 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gwtkf" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.318323 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n5x8r"] Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.373743 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h9bh\" (UniqueName: \"kubernetes.io/projected/c04b4891-c344-4423-915d-32f721cb5899-kube-api-access-7h9bh\") pod \"openstack-operator-index-n5x8r\" (UID: \"c04b4891-c344-4423-915d-32f721cb5899\") " pod="openstack-operators/openstack-operator-index-n5x8r" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.474755 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h9bh\" (UniqueName: \"kubernetes.io/projected/c04b4891-c344-4423-915d-32f721cb5899-kube-api-access-7h9bh\") pod \"openstack-operator-index-n5x8r\" (UID: \"c04b4891-c344-4423-915d-32f721cb5899\") " pod="openstack-operators/openstack-operator-index-n5x8r" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.494301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h9bh\" (UniqueName: \"kubernetes.io/projected/c04b4891-c344-4423-915d-32f721cb5899-kube-api-access-7h9bh\") pod \"openstack-operator-index-n5x8r\" (UID: \"c04b4891-c344-4423-915d-32f721cb5899\") " pod="openstack-operators/openstack-operator-index-n5x8r" Nov 24 19:32:07 crc kubenswrapper[4812]: I1124 19:32:07.628558 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n5x8r" Nov 24 19:32:08 crc kubenswrapper[4812]: I1124 19:32:08.092276 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n5x8r"] Nov 24 19:32:08 crc kubenswrapper[4812]: I1124 19:32:08.523268 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n5x8r" event={"ID":"c04b4891-c344-4423-915d-32f721cb5899","Type":"ContainerStarted","Data":"eef7243f00b5917c0d9222ea961f45c8f3067f12f87cdfef801200bb01eda88b"} Nov 24 19:32:09 crc kubenswrapper[4812]: I1124 19:32:09.531174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n5x8r" event={"ID":"c04b4891-c344-4423-915d-32f721cb5899","Type":"ContainerStarted","Data":"65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805"} Nov 24 19:32:09 crc kubenswrapper[4812]: I1124 19:32:09.550302 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n5x8r" podStartSLOduration=1.4586494810000001 podStartE2EDuration="2.5502831s" podCreationTimestamp="2025-11-24 19:32:07 +0000 UTC" firstStartedPulling="2025-11-24 19:32:08.10349786 +0000 UTC m=+921.892450251" lastFinishedPulling="2025-11-24 19:32:09.195131459 +0000 UTC m=+922.984083870" observedRunningTime="2025-11-24 19:32:09.545908026 +0000 UTC m=+923.334860427" watchObservedRunningTime="2025-11-24 19:32:09.5502831 +0000 UTC m=+923.339235471" Nov 24 19:32:10 crc kubenswrapper[4812]: I1124 19:32:10.635986 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n5x8r"] Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.249966 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-d74vw"] Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.253474 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.261291 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d74vw"] Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.340515 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdx78\" (UniqueName: \"kubernetes.io/projected/b3fcebf7-47e1-4467-92c7-391f0d9bcc5b-kube-api-access-gdx78\") pod \"openstack-operator-index-d74vw\" (UID: \"b3fcebf7-47e1-4467-92c7-391f0d9bcc5b\") " pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.441241 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdx78\" (UniqueName: \"kubernetes.io/projected/b3fcebf7-47e1-4467-92c7-391f0d9bcc5b-kube-api-access-gdx78\") pod \"openstack-operator-index-d74vw\" (UID: \"b3fcebf7-47e1-4467-92c7-391f0d9bcc5b\") " pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.469479 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdx78\" (UniqueName: \"kubernetes.io/projected/b3fcebf7-47e1-4467-92c7-391f0d9bcc5b-kube-api-access-gdx78\") pod \"openstack-operator-index-d74vw\" (UID: \"b3fcebf7-47e1-4467-92c7-391f0d9bcc5b\") " pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.544774 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-n5x8r" podUID="c04b4891-c344-4423-915d-32f721cb5899" containerName="registry-server" containerID="cri-o://65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805" gracePeriod=2 Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.613161 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.907927 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-d74vw"] Nov 24 19:32:11 crc kubenswrapper[4812]: W1124 19:32:11.916745 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fcebf7_47e1_4467_92c7_391f0d9bcc5b.slice/crio-f799753db3aa549f5d52eeb63dcff554e53de8ee037dd59f12c3aea57e3baa6c WatchSource:0}: Error finding container f799753db3aa549f5d52eeb63dcff554e53de8ee037dd59f12c3aea57e3baa6c: Status 404 returned error can't find the container with id f799753db3aa549f5d52eeb63dcff554e53de8ee037dd59f12c3aea57e3baa6c Nov 24 19:32:11 crc kubenswrapper[4812]: I1124 19:32:11.957691 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n5x8r" Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.151114 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h9bh\" (UniqueName: \"kubernetes.io/projected/c04b4891-c344-4423-915d-32f721cb5899-kube-api-access-7h9bh\") pod \"c04b4891-c344-4423-915d-32f721cb5899\" (UID: \"c04b4891-c344-4423-915d-32f721cb5899\") " Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.159705 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04b4891-c344-4423-915d-32f721cb5899-kube-api-access-7h9bh" (OuterVolumeSpecName: "kube-api-access-7h9bh") pod "c04b4891-c344-4423-915d-32f721cb5899" (UID: "c04b4891-c344-4423-915d-32f721cb5899"). InnerVolumeSpecName "kube-api-access-7h9bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.252820 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h9bh\" (UniqueName: \"kubernetes.io/projected/c04b4891-c344-4423-915d-32f721cb5899-kube-api-access-7h9bh\") on node \"crc\" DevicePath \"\"" Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.554527 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d74vw" event={"ID":"b3fcebf7-47e1-4467-92c7-391f0d9bcc5b","Type":"ContainerStarted","Data":"f799753db3aa549f5d52eeb63dcff554e53de8ee037dd59f12c3aea57e3baa6c"} Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.556794 4812 generic.go:334] "Generic (PLEG): container finished" podID="c04b4891-c344-4423-915d-32f721cb5899" containerID="65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805" exitCode=0 Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.556848 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n5x8r" event={"ID":"c04b4891-c344-4423-915d-32f721cb5899","Type":"ContainerDied","Data":"65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805"} Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.556858 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n5x8r" Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.556888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n5x8r" event={"ID":"c04b4891-c344-4423-915d-32f721cb5899","Type":"ContainerDied","Data":"eef7243f00b5917c0d9222ea961f45c8f3067f12f87cdfef801200bb01eda88b"} Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.556911 4812 scope.go:117] "RemoveContainer" containerID="65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805" Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.578919 4812 scope.go:117] "RemoveContainer" containerID="65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805" Nov 24 19:32:12 crc kubenswrapper[4812]: E1124 19:32:12.580170 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805\": container with ID starting with 65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805 not found: ID does not exist" containerID="65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805" Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.580214 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805"} err="failed to get container status \"65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805\": rpc error: code = NotFound desc = could not find container \"65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805\": container with ID starting with 65563bb64f5ffef137019ec674ca666ff375ce33dd11071b63f8efbd12f1f805 not found: ID does not exist" Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.602082 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n5x8r"] Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.606286 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-n5x8r"] Nov 24 19:32:12 crc kubenswrapper[4812]: I1124 19:32:12.978026 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04b4891-c344-4423-915d-32f721cb5899" path="/var/lib/kubelet/pods/c04b4891-c344-4423-915d-32f721cb5899/volumes" Nov 24 19:32:13 crc kubenswrapper[4812]: I1124 19:32:13.572043 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-d74vw" event={"ID":"b3fcebf7-47e1-4467-92c7-391f0d9bcc5b","Type":"ContainerStarted","Data":"ce2ef577a505096b53bac7ca5bed0d0180dea114c3dc1d9ca42abdd99dc46947"} Nov 24 19:32:13 crc kubenswrapper[4812]: I1124 19:32:13.598932 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-d74vw" podStartSLOduration=2.125725452 podStartE2EDuration="2.59890266s" podCreationTimestamp="2025-11-24 19:32:11 +0000 UTC" firstStartedPulling="2025-11-24 19:32:11.923116797 +0000 UTC m=+925.712069168" lastFinishedPulling="2025-11-24 19:32:12.396293995 +0000 UTC m=+926.185246376" observedRunningTime="2025-11-24 19:32:13.597061288 +0000 UTC m=+927.386013709" watchObservedRunningTime="2025-11-24 19:32:13.59890266 +0000 UTC m=+927.387855061" Nov 24 19:32:21 crc kubenswrapper[4812]: I1124 19:32:21.614036 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:21 crc kubenswrapper[4812]: I1124 19:32:21.614687 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:21 crc kubenswrapper[4812]: I1124 19:32:21.661101 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:21 crc kubenswrapper[4812]: I1124 19:32:21.702262 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-d74vw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.205311 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw"] Nov 24 19:32:29 crc kubenswrapper[4812]: E1124 19:32:29.206615 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04b4891-c344-4423-915d-32f721cb5899" containerName="registry-server" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.206648 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04b4891-c344-4423-915d-32f721cb5899" containerName="registry-server" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.206948 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04b4891-c344-4423-915d-32f721cb5899" containerName="registry-server" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.208885 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.214189 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-tqxps" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.217980 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw"] Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.221784 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxnw\" (UniqueName: \"kubernetes.io/projected/66e69f66-9eb1-4276-b756-572b50aa2417-kube-api-access-5rxnw\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.221863 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.221904 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 
19:32:29.323690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.323822 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.323938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rxnw\" (UniqueName: \"kubernetes.io/projected/66e69f66-9eb1-4276-b756-572b50aa2417-kube-api-access-5rxnw\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.325096 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-bundle\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.325242 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-util\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.362590 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rxnw\" (UniqueName: \"kubernetes.io/projected/66e69f66-9eb1-4276-b756-572b50aa2417-kube-api-access-5rxnw\") pod \"bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.530566 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:29 crc kubenswrapper[4812]: I1124 19:32:29.753491 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw"] Nov 24 19:32:29 crc kubenswrapper[4812]: W1124 19:32:29.761597 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66e69f66_9eb1_4276_b756_572b50aa2417.slice/crio-8447c4b325f95e7ae413bfe2d5353a688045e7dda8ba058d3c7b3dc4fa64100e WatchSource:0}: Error finding container 8447c4b325f95e7ae413bfe2d5353a688045e7dda8ba058d3c7b3dc4fa64100e: Status 404 returned error can't find the container with id 8447c4b325f95e7ae413bfe2d5353a688045e7dda8ba058d3c7b3dc4fa64100e Nov 24 19:32:30 crc kubenswrapper[4812]: I1124 19:32:30.712488 4812 generic.go:334] "Generic (PLEG): container finished" podID="66e69f66-9eb1-4276-b756-572b50aa2417" containerID="c0e4aa51df6e02e8b5bf6cab946786b62e33b051d7d6432fa1509ae6d7940566" exitCode=0 Nov 24 19:32:30 crc kubenswrapper[4812]: I1124 19:32:30.712846 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" event={"ID":"66e69f66-9eb1-4276-b756-572b50aa2417","Type":"ContainerDied","Data":"c0e4aa51df6e02e8b5bf6cab946786b62e33b051d7d6432fa1509ae6d7940566"} Nov 24 19:32:30 crc kubenswrapper[4812]: I1124 19:32:30.712886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" event={"ID":"66e69f66-9eb1-4276-b756-572b50aa2417","Type":"ContainerStarted","Data":"8447c4b325f95e7ae413bfe2d5353a688045e7dda8ba058d3c7b3dc4fa64100e"} Nov 24 19:32:31 crc kubenswrapper[4812]: I1124 19:32:31.727805 4812 generic.go:334] "Generic (PLEG): container finished" podID="66e69f66-9eb1-4276-b756-572b50aa2417" containerID="cce1ff3c2693eca99c240fdaca6ba6f80fdf72348f8547592771d7af6a6fadfe" exitCode=0 Nov 24 19:32:31 crc kubenswrapper[4812]: I1124 19:32:31.727940 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" event={"ID":"66e69f66-9eb1-4276-b756-572b50aa2417","Type":"ContainerDied","Data":"cce1ff3c2693eca99c240fdaca6ba6f80fdf72348f8547592771d7af6a6fadfe"} Nov 24 19:32:32 crc kubenswrapper[4812]: I1124 19:32:32.739184 4812 generic.go:334] "Generic (PLEG): container finished" podID="66e69f66-9eb1-4276-b756-572b50aa2417" containerID="b0ebeba58aa6ef4a60b2e79897e749ff1f5e1e8144299e68e2a378ae788e14d6" exitCode=0 Nov 24 19:32:32 crc kubenswrapper[4812]: I1124 19:32:32.739228 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" event={"ID":"66e69f66-9eb1-4276-b756-572b50aa2417","Type":"ContainerDied","Data":"b0ebeba58aa6ef4a60b2e79897e749ff1f5e1e8144299e68e2a378ae788e14d6"} Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.105472 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.303529 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-bundle\") pod \"66e69f66-9eb1-4276-b756-572b50aa2417\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.303610 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-util\") pod \"66e69f66-9eb1-4276-b756-572b50aa2417\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.303762 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rxnw\" (UniqueName: \"kubernetes.io/projected/66e69f66-9eb1-4276-b756-572b50aa2417-kube-api-access-5rxnw\") pod \"66e69f66-9eb1-4276-b756-572b50aa2417\" (UID: \"66e69f66-9eb1-4276-b756-572b50aa2417\") " Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.304758 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-bundle" (OuterVolumeSpecName: "bundle") pod "66e69f66-9eb1-4276-b756-572b50aa2417" (UID: "66e69f66-9eb1-4276-b756-572b50aa2417"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.316284 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66e69f66-9eb1-4276-b756-572b50aa2417-kube-api-access-5rxnw" (OuterVolumeSpecName: "kube-api-access-5rxnw") pod "66e69f66-9eb1-4276-b756-572b50aa2417" (UID: "66e69f66-9eb1-4276-b756-572b50aa2417"). InnerVolumeSpecName "kube-api-access-5rxnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.320360 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-util" (OuterVolumeSpecName: "util") pod "66e69f66-9eb1-4276-b756-572b50aa2417" (UID: "66e69f66-9eb1-4276-b756-572b50aa2417"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.405786 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.405827 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66e69f66-9eb1-4276-b756-572b50aa2417-util\") on node \"crc\" DevicePath \"\"" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.405840 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rxnw\" (UniqueName: \"kubernetes.io/projected/66e69f66-9eb1-4276-b756-572b50aa2417-kube-api-access-5rxnw\") on node \"crc\" DevicePath \"\"" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.757350 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" event={"ID":"66e69f66-9eb1-4276-b756-572b50aa2417","Type":"ContainerDied","Data":"8447c4b325f95e7ae413bfe2d5353a688045e7dda8ba058d3c7b3dc4fa64100e"} Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.757389 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8447c4b325f95e7ae413bfe2d5353a688045e7dda8ba058d3c7b3dc4fa64100e" Nov 24 19:32:34 crc kubenswrapper[4812]: I1124 19:32:34.757422 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.217705 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv"] Nov 24 19:32:40 crc kubenswrapper[4812]: E1124 19:32:40.218562 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e69f66-9eb1-4276-b756-572b50aa2417" containerName="extract" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.218580 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e69f66-9eb1-4276-b756-572b50aa2417" containerName="extract" Nov 24 19:32:40 crc kubenswrapper[4812]: E1124 19:32:40.218595 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e69f66-9eb1-4276-b756-572b50aa2417" containerName="util" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.218603 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e69f66-9eb1-4276-b756-572b50aa2417" containerName="util" Nov 24 19:32:40 crc kubenswrapper[4812]: E1124 19:32:40.218627 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66e69f66-9eb1-4276-b756-572b50aa2417" containerName="pull" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.218634 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="66e69f66-9eb1-4276-b756-572b50aa2417" containerName="pull" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.218767 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="66e69f66-9eb1-4276-b756-572b50aa2417" containerName="extract" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.219284 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.222126 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-pjf72" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.252149 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv"] Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.386913 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56w8\" (UniqueName: \"kubernetes.io/projected/6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f-kube-api-access-k56w8\") pod \"openstack-operator-controller-operator-7b567956b5-x6rdv\" (UID: \"6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.488555 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56w8\" (UniqueName: \"kubernetes.io/projected/6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f-kube-api-access-k56w8\") pod \"openstack-operator-controller-operator-7b567956b5-x6rdv\" (UID: \"6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.512512 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56w8\" (UniqueName: \"kubernetes.io/projected/6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f-kube-api-access-k56w8\") pod \"openstack-operator-controller-operator-7b567956b5-x6rdv\" (UID: \"6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f\") " pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.537983 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" Nov 24 19:32:40 crc kubenswrapper[4812]: I1124 19:32:40.865985 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv"] Nov 24 19:32:41 crc kubenswrapper[4812]: I1124 19:32:41.812820 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" event={"ID":"6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f","Type":"ContainerStarted","Data":"d8bed7d90e0a78acddd2020eccdc1764c2aff6e881a4ffecfa06f9a8764edeec"} Nov 24 19:32:45 crc kubenswrapper[4812]: I1124 19:32:45.861251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" event={"ID":"6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f","Type":"ContainerStarted","Data":"67ced075c7067d020b1eb430b76209ed5e08a11e2b1d71e3ba8929be228c3710"} Nov 24 19:32:45 crc kubenswrapper[4812]: I1124 19:32:45.861916 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" Nov 24 19:32:45 crc kubenswrapper[4812]: I1124 19:32:45.894430 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" podStartSLOduration=1.300642851 podStartE2EDuration="5.894411316s" podCreationTimestamp="2025-11-24 19:32:40 +0000 UTC" firstStartedPulling="2025-11-24 19:32:40.873468521 +0000 UTC m=+954.662420902" lastFinishedPulling="2025-11-24 19:32:45.467237006 +0000 UTC m=+959.256189367" observedRunningTime="2025-11-24 19:32:45.892075509 +0000 UTC m=+959.681027900" watchObservedRunningTime="2025-11-24 19:32:45.894411316 +0000 UTC m=+959.683363697" Nov 24 19:32:50 crc kubenswrapper[4812]: I1124 19:32:50.542209 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7b567956b5-x6rdv" Nov 24 19:33:26 crc kubenswrapper[4812]: I1124 19:33:26.997418 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl"] Nov 24 19:33:26 crc kubenswrapper[4812]: I1124 19:33:26.999235 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.001018 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-rq2ck" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.004099 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.005409 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.007319 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ltw8m" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.009036 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.010249 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.013244 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-l7tf9" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.016615 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.030286 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.055448 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd5bm\" (UniqueName: \"kubernetes.io/projected/baf03f1e-597c-434a-b90d-821e44d586d1-kube-api-access-cd5bm\") pod \"barbican-operator-controller-manager-86dc4d89c8-wqgxl\" (UID: \"baf03f1e-597c-434a-b90d-821e44d586d1\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.055488 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c9xr\" (UniqueName: \"kubernetes.io/projected/e41fac25-1110-4e6e-a45e-5529ef0ef2f1-kube-api-access-6c9xr\") pod \"cinder-operator-controller-manager-79856dc55c-98txq\" (UID: \"e41fac25-1110-4e6e-a45e-5529ef0ef2f1\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.055518 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fx7\" (UniqueName: \"kubernetes.io/projected/851fd2ec-631c-41d1-826f-934b3561cd70-kube-api-access-j8fx7\") pod \"designate-operator-controller-manager-7d695c9b56-48cqh\" (UID: \"851fd2ec-631c-41d1-826f-934b3561cd70\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.074559 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.084405 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.085409 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.090166 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.095565 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-x68kx" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.101905 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.103036 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.105673 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-mbgwh" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.121530 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.137257 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.138371 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.140608 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tmcm4" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.157261 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd5bm\" (UniqueName: \"kubernetes.io/projected/baf03f1e-597c-434a-b90d-821e44d586d1-kube-api-access-cd5bm\") pod \"barbican-operator-controller-manager-86dc4d89c8-wqgxl\" (UID: \"baf03f1e-597c-434a-b90d-821e44d586d1\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.157302 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c9xr\" (UniqueName: \"kubernetes.io/projected/e41fac25-1110-4e6e-a45e-5529ef0ef2f1-kube-api-access-6c9xr\") pod \"cinder-operator-controller-manager-79856dc55c-98txq\" (UID: \"e41fac25-1110-4e6e-a45e-5529ef0ef2f1\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.157343 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8fx7\" (UniqueName: \"kubernetes.io/projected/851fd2ec-631c-41d1-826f-934b3561cd70-kube-api-access-j8fx7\") pod \"designate-operator-controller-manager-7d695c9b56-48cqh\" (UID: \"851fd2ec-631c-41d1-826f-934b3561cd70\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.157390 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4rz\" (UniqueName: 
\"kubernetes.io/projected/82485996-0922-41c0-903c-e44eadf8be30-kube-api-access-xk4rz\") pod \"glance-operator-controller-manager-68b95954c9-5j5g2\" (UID: \"82485996-0922-41c0-903c-e44eadf8be30\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.157421 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk9nh\" (UniqueName: \"kubernetes.io/projected/2f7164ed-0304-404a-938f-134952b55d15-kube-api-access-jk9nh\") pod \"heat-operator-controller-manager-774b86978c-mz2ht\" (UID: \"2f7164ed-0304-404a-938f-134952b55d15\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.164391 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.188009 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8fx7\" (UniqueName: \"kubernetes.io/projected/851fd2ec-631c-41d1-826f-934b3561cd70-kube-api-access-j8fx7\") pod \"designate-operator-controller-manager-7d695c9b56-48cqh\" (UID: \"851fd2ec-631c-41d1-826f-934b3561cd70\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.188141 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.189203 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.197766 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c9xr\" (UniqueName: \"kubernetes.io/projected/e41fac25-1110-4e6e-a45e-5529ef0ef2f1-kube-api-access-6c9xr\") pod \"cinder-operator-controller-manager-79856dc55c-98txq\" (UID: \"e41fac25-1110-4e6e-a45e-5529ef0ef2f1\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.198198 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd5bm\" (UniqueName: \"kubernetes.io/projected/baf03f1e-597c-434a-b90d-821e44d586d1-kube-api-access-cd5bm\") pod \"barbican-operator-controller-manager-86dc4d89c8-wqgxl\" (UID: \"baf03f1e-597c-434a-b90d-821e44d586d1\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.198558 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7mslt" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.198737 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.205091 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.211463 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.216365 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-pfcgf" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.218415 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.223430 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.259517 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lszmh\" (UniqueName: \"kubernetes.io/projected/c317b8b9-c5b6-4aa3-b666-425be5ee68fc-kube-api-access-lszmh\") pod \"horizon-operator-controller-manager-68c9694994-6h6pl\" (UID: \"c317b8b9-c5b6-4aa3-b666-425be5ee68fc\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.259756 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j24xr\" (UniqueName: \"kubernetes.io/projected/4a7a540a-2cc4-4703-bfa6-b85acdffe4a7-kube-api-access-j24xr\") pod \"infra-operator-controller-manager-d5cc86f4b-fprj2\" (UID: \"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.259880 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a7a540a-2cc4-4703-bfa6-b85acdffe4a7-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-fprj2\" (UID: \"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.259979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4rz\" (UniqueName: \"kubernetes.io/projected/82485996-0922-41c0-903c-e44eadf8be30-kube-api-access-xk4rz\") pod \"glance-operator-controller-manager-68b95954c9-5j5g2\" (UID: \"82485996-0922-41c0-903c-e44eadf8be30\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.260077 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-657fj\" (UniqueName: \"kubernetes.io/projected/393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb-kube-api-access-657fj\") pod \"ironic-operator-controller-manager-5bfcdc958c-zxcm8\" (UID: \"393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.260189 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk9nh\" (UniqueName: \"kubernetes.io/projected/2f7164ed-0304-404a-938f-134952b55d15-kube-api-access-jk9nh\") pod \"heat-operator-controller-manager-774b86978c-mz2ht\" (UID: \"2f7164ed-0304-404a-938f-134952b55d15\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.262410 4812 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.263577 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.266982 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-527wn" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.273264 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.274319 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.281065 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-psf4f" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.287263 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk9nh\" (UniqueName: \"kubernetes.io/projected/2f7164ed-0304-404a-938f-134952b55d15-kube-api-access-jk9nh\") pod \"heat-operator-controller-manager-774b86978c-mz2ht\" (UID: \"2f7164ed-0304-404a-938f-134952b55d15\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.293248 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.304144 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.311002 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.312451 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.312972 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4rz\" (UniqueName: \"kubernetes.io/projected/82485996-0922-41c0-903c-e44eadf8be30-kube-api-access-xk4rz\") pod \"glance-operator-controller-manager-68b95954c9-5j5g2\" (UID: \"82485996-0922-41c0-903c-e44eadf8be30\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.315702 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-r44xf" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.322372 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.322892 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.330253 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.331454 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.337458 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-v5l8d" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.353946 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.355251 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.355414 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.357498 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r5djl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.362424 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a7a540a-2cc4-4703-bfa6-b85acdffe4a7-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-fprj2\" (UID: \"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.362486 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-657fj\" (UniqueName: \"kubernetes.io/projected/393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb-kube-api-access-657fj\") pod \"ironic-operator-controller-manager-5bfcdc958c-zxcm8\" (UID: \"393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.362541 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszmh\" (UniqueName: \"kubernetes.io/projected/c317b8b9-c5b6-4aa3-b666-425be5ee68fc-kube-api-access-lszmh\") pod \"horizon-operator-controller-manager-68c9694994-6h6pl\" (UID: \"c317b8b9-c5b6-4aa3-b666-425be5ee68fc\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.362580 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgx9r\" (UniqueName: \"kubernetes.io/projected/b745dd21-3908-4dba-8966-a4be4aea8aa4-kube-api-access-sgx9r\") pod \"manila-operator-controller-manager-58bb8d67cc-5fw6l\" (UID: \"b745dd21-3908-4dba-8966-a4be4aea8aa4\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.362600 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bktnv\" (UniqueName: \"kubernetes.io/projected/aa881065-4590-4fbb-9f31-926164d07125-kube-api-access-bktnv\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-s9pm6\" (UID: \"aa881065-4590-4fbb-9f31-926164d07125\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.362645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdkf7\" (UniqueName: \"kubernetes.io/projected/3aafb524-8fa5-4752-b3f5-a4c700c8b4ee-kube-api-access-vdkf7\") pod \"keystone-operator-controller-manager-748dc6576f-b5ghq\" (UID: \"3aafb524-8fa5-4752-b3f5-a4c700c8b4ee\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.362667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j24xr\" (UniqueName: \"kubernetes.io/projected/4a7a540a-2cc4-4703-bfa6-b85acdffe4a7-kube-api-access-j24xr\") pod \"infra-operator-controller-manager-d5cc86f4b-fprj2\" (UID: \"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.365451 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.367075 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a7a540a-2cc4-4703-bfa6-b85acdffe4a7-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-fprj2\" (UID: \"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.373618 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.375577 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.382153 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.383391 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.385821 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszmh\" (UniqueName: \"kubernetes.io/projected/c317b8b9-c5b6-4aa3-b666-425be5ee68fc-kube-api-access-lszmh\") pod \"horizon-operator-controller-manager-68c9694994-6h6pl\" (UID: \"c317b8b9-c5b6-4aa3-b666-425be5ee68fc\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.386318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-657fj\" (UniqueName: \"kubernetes.io/projected/393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb-kube-api-access-657fj\") pod \"ironic-operator-controller-manager-5bfcdc958c-zxcm8\" (UID: \"393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.387771 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.388636 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-z59qm" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.398408 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j24xr\" (UniqueName: \"kubernetes.io/projected/4a7a540a-2cc4-4703-bfa6-b85acdffe4a7-kube-api-access-j24xr\") pod \"infra-operator-controller-manager-d5cc86f4b-fprj2\" (UID: \"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.401753 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.402620 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.404460 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.404740 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jp58g" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.405879 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.407430 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.408707 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.411021 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.411766 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-mdsg4" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.412025 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.415611 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.417873 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-dftnw" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.417955 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.435134 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.437618 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.440872 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.444542 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-5qp6n" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.453647 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.466361 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.466423 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdkf7\" (UniqueName: \"kubernetes.io/projected/3aafb524-8fa5-4752-b3f5-a4c700c8b4ee-kube-api-access-vdkf7\") pod \"keystone-operator-controller-manager-748dc6576f-b5ghq\" (UID: \"3aafb524-8fa5-4752-b3f5-a4c700c8b4ee\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.466480 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfxv\" (UniqueName: \"kubernetes.io/projected/31db8161-54b4-417d-b6ec-85109904df50-kube-api-access-4tfxv\") pod \"placement-operator-controller-manager-5db546f9d9-ptcwx\" (UID: \"31db8161-54b4-417d-b6ec-85109904df50\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.466546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4rjh\" (UniqueName: \"kubernetes.io/projected/34cfb52c-d093-4d71-bba3-0ab2e2047e74-kube-api-access-x4rjh\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.466675 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlj88\" (UniqueName: \"kubernetes.io/projected/8e0c969b-35df-4a7c-85ff-ae79ea881c06-kube-api-access-dlj88\") pod \"octavia-operator-controller-manager-fd75fd47d-cgt44\" (UID: \"8e0c969b-35df-4a7c-85ff-ae79ea881c06\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.467235 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgx9r\" (UniqueName: \"kubernetes.io/projected/b745dd21-3908-4dba-8966-a4be4aea8aa4-kube-api-access-sgx9r\") pod \"manila-operator-controller-manager-58bb8d67cc-5fw6l\" (UID: \"b745dd21-3908-4dba-8966-a4be4aea8aa4\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.467280 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bktnv\" (UniqueName: \"kubernetes.io/projected/aa881065-4590-4fbb-9f31-926164d07125-kube-api-access-bktnv\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-s9pm6\" (UID: \"aa881065-4590-4fbb-9f31-926164d07125\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.467304 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjb72\" (UniqueName: \"kubernetes.io/projected/599321b1-4c22-4f86-a8fb-16ab7335db6a-kube-api-access-zjb72\") pod \"nova-operator-controller-manager-79556f57fc-jpljb\" (UID: \"599321b1-4c22-4f86-a8fb-16ab7335db6a\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.467342 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8s2w\" (UniqueName: \"kubernetes.io/projected/76a00195-5eec-4790-93cf-0167e77c2d69-kube-api-access-x8s2w\") pod \"ovn-operator-controller-manager-66cf5c67ff-jg58x\" (UID: \"76a00195-5eec-4790-93cf-0167e77c2d69\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.467375 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6vk\" (UniqueName: \"kubernetes.io/projected/39c92508-c338-4865-b7fb-aa0c026e5f6b-kube-api-access-4r6vk\") pod \"neutron-operator-controller-manager-7c57c8bbc4-7thtk\" (UID: \"39c92508-c338-4865-b7fb-aa0c026e5f6b\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.485890 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.498942 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bktnv\" (UniqueName: \"kubernetes.io/projected/aa881065-4590-4fbb-9f31-926164d07125-kube-api-access-bktnv\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-s9pm6\" (UID: \"aa881065-4590-4fbb-9f31-926164d07125\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.502624 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.503324 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdkf7\" (UniqueName: \"kubernetes.io/projected/3aafb524-8fa5-4752-b3f5-a4c700c8b4ee-kube-api-access-vdkf7\") pod \"keystone-operator-controller-manager-748dc6576f-b5ghq\" (UID: \"3aafb524-8fa5-4752-b3f5-a4c700c8b4ee\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.510818 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgx9r\" (UniqueName: \"kubernetes.io/projected/b745dd21-3908-4dba-8966-a4be4aea8aa4-kube-api-access-sgx9r\") pod \"manila-operator-controller-manager-58bb8d67cc-5fw6l\" (UID: \"b745dd21-3908-4dba-8966-a4be4aea8aa4\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.533369 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.534536 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.537572 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-lc4sj" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.546298 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.551716 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.570105 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6vk\" (UniqueName: \"kubernetes.io/projected/39c92508-c338-4865-b7fb-aa0c026e5f6b-kube-api-access-4r6vk\") pod \"neutron-operator-controller-manager-7c57c8bbc4-7thtk\" (UID: \"39c92508-c338-4865-b7fb-aa0c026e5f6b\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.570157 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.570203 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfxv\" (UniqueName: \"kubernetes.io/projected/31db8161-54b4-417d-b6ec-85109904df50-kube-api-access-4tfxv\") pod \"placement-operator-controller-manager-5db546f9d9-ptcwx\" (UID: \"31db8161-54b4-417d-b6ec-85109904df50\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.570240 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4rjh\" (UniqueName: \"kubernetes.io/projected/34cfb52c-d093-4d71-bba3-0ab2e2047e74-kube-api-access-x4rjh\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.570272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pnqd\" (UniqueName: \"kubernetes.io/projected/2ec7107c-8ae0-4a00-901e-e70ac99520e7-kube-api-access-7pnqd\") pod \"swift-operator-controller-manager-6fdc4fcf86-68nf9\" (UID: \"2ec7107c-8ae0-4a00-901e-e70ac99520e7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.570314 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlj88\" (UniqueName: \"kubernetes.io/projected/8e0c969b-35df-4a7c-85ff-ae79ea881c06-kube-api-access-dlj88\") pod \"octavia-operator-controller-manager-fd75fd47d-cgt44\" (UID: \"8e0c969b-35df-4a7c-85ff-ae79ea881c06\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" Nov 24 19:33:27 crc 
kubenswrapper[4812]: I1124 19:33:27.570359 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjb72\" (UniqueName: \"kubernetes.io/projected/599321b1-4c22-4f86-a8fb-16ab7335db6a-kube-api-access-zjb72\") pod \"nova-operator-controller-manager-79556f57fc-jpljb\" (UID: \"599321b1-4c22-4f86-a8fb-16ab7335db6a\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.570379 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8s2w\" (UniqueName: \"kubernetes.io/projected/76a00195-5eec-4790-93cf-0167e77c2d69-kube-api-access-x8s2w\") pod \"ovn-operator-controller-manager-66cf5c67ff-jg58x\" (UID: \"76a00195-5eec-4790-93cf-0167e77c2d69\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" Nov 24 19:33:27 crc kubenswrapper[4812]: E1124 19:33:27.570881 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 19:33:27 crc kubenswrapper[4812]: E1124 19:33:27.570929 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert podName:34cfb52c-d093-4d71-bba3-0ab2e2047e74 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:28.070915156 +0000 UTC m=+1001.859867527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" (UID: "34cfb52c-d093-4d71-bba3-0ab2e2047e74") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.576212 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.577377 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq"
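
The nestedpendingoperations entry above defers the failed "cert" mount with durationBeforeRetry 500ms; when the secret is still missing a second later (see the E1124 19:33:28.080120 entry further down), the same operation is deferred again with durationBeforeRetry 1s. That doubling is consistent with a capped exponential backoff; a minimal sketch of the pattern (illustrative only, not kubelet source; the 500ms initial delay and factor 2 are taken from the log, while the cap value is an assumption and is not shown in this excerpt):

    import itertools

    def retry_delays(initial=0.5, factor=2.0, cap=122.0):
        """Yield capped, exponentially increasing retry delays in seconds."""
        delay = initial
        while True:
            yield min(delay, cap)  # cap=122.0 is an assumed ceiling, not from the log
            delay *= factor

    print(list(itertools.islice(retry_delays(), 5)))  # [0.5, 1.0, 2.0, 4.0, 8.0]
    # The log shows the first two steps for this volume: 500ms, then 1s.
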
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.593286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6vk\" (UniqueName: \"kubernetes.io/projected/39c92508-c338-4865-b7fb-aa0c026e5f6b-kube-api-access-4r6vk\") pod \"neutron-operator-controller-manager-7c57c8bbc4-7thtk\" (UID: \"39c92508-c338-4865-b7fb-aa0c026e5f6b\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.594060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4rjh\" (UniqueName: \"kubernetes.io/projected/34cfb52c-d093-4d71-bba3-0ab2e2047e74-kube-api-access-x4rjh\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.615492 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8s2w\" (UniqueName: \"kubernetes.io/projected/76a00195-5eec-4790-93cf-0167e77c2d69-kube-api-access-x8s2w\") pod \"ovn-operator-controller-manager-66cf5c67ff-jg58x\" (UID: \"76a00195-5eec-4790-93cf-0167e77c2d69\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.616278 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-28xt4"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.617487 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.618242 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjb72\" (UniqueName: \"kubernetes.io/projected/599321b1-4c22-4f86-a8fb-16ab7335db6a-kube-api-access-zjb72\") pod \"nova-operator-controller-manager-79556f57fc-jpljb\" (UID: \"599321b1-4c22-4f86-a8fb-16ab7335db6a\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.622906 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlj88\" (UniqueName: \"kubernetes.io/projected/8e0c969b-35df-4a7c-85ff-ae79ea881c06-kube-api-access-dlj88\") pod \"octavia-operator-controller-manager-fd75fd47d-cgt44\" (UID: \"8e0c969b-35df-4a7c-85ff-ae79ea881c06\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.623401 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfxv\" (UniqueName: \"kubernetes.io/projected/31db8161-54b4-417d-b6ec-85109904df50-kube-api-access-4tfxv\") pod \"placement-operator-controller-manager-5db546f9d9-ptcwx\" (UID: \"31db8161-54b4-417d-b6ec-85109904df50\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.628698 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-9njnq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.629060 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-28xt4"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.657121 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-h7bnm"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.658540 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.661726 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-t8mpq" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.665785 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-h7bnm"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.671162 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz5gz\" (UniqueName: \"kubernetes.io/projected/05059f83-e196-4181-916f-ba36ed828d22-kube-api-access-hz5gz\") pod \"test-operator-controller-manager-5cb74df96-28xt4\" (UID: \"05059f83-e196-4181-916f-ba36ed828d22\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.671216 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pnqd\" (UniqueName: \"kubernetes.io/projected/2ec7107c-8ae0-4a00-901e-e70ac99520e7-kube-api-access-7pnqd\") pod \"swift-operator-controller-manager-6fdc4fcf86-68nf9\" (UID: \"2ec7107c-8ae0-4a00-901e-e70ac99520e7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.671301 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7jw2\" (UniqueName: \"kubernetes.io/projected/2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8-kube-api-access-x7jw2\") pod \"telemetry-operator-controller-manager-567f98c9d-x8j6n\" (UID: \"2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.686114 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.697052 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.708397 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.713366 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.717084 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pnqd\" (UniqueName: \"kubernetes.io/projected/2ec7107c-8ae0-4a00-901e-e70ac99520e7-kube-api-access-7pnqd\") pod \"swift-operator-controller-manager-6fdc4fcf86-68nf9\" (UID: \"2ec7107c-8ae0-4a00-901e-e70ac99520e7\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.718417 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.719287 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.720938 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.721821 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.721962 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.722097 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-c7bxl" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.726939 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.740522 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.772997 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwdv\" (UniqueName: \"kubernetes.io/projected/8b38e013-64de-4e6d-8092-b9607b1c21f7-kube-api-access-jjwdv\") pod \"watcher-operator-controller-manager-864885998-h7bnm\" (UID: \"8b38e013-64de-4e6d-8092-b9607b1c21f7\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.773064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7jw2\" (UniqueName: \"kubernetes.io/projected/2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8-kube-api-access-x7jw2\") pod \"telemetry-operator-controller-manager-567f98c9d-x8j6n\" (UID: \"2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.773083 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.773126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzqdp\" (UniqueName: \"kubernetes.io/projected/eff10de0-2386-4e12-97a9-b6f08b1eda95-kube-api-access-tzqdp\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.773150 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz5gz\" (UniqueName: \"kubernetes.io/projected/05059f83-e196-4181-916f-ba36ed828d22-kube-api-access-hz5gz\") pod \"test-operator-controller-manager-5cb74df96-28xt4\" (UID: 
\"05059f83-e196-4181-916f-ba36ed828d22\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.773259 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.773648 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.788450 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.789249 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.794418 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-msgfh" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.800970 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.809217 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.811640 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28"] Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.825485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz5gz\" (UniqueName: \"kubernetes.io/projected/05059f83-e196-4181-916f-ba36ed828d22-kube-api-access-hz5gz\") pod \"test-operator-controller-manager-5cb74df96-28xt4\" (UID: \"05059f83-e196-4181-916f-ba36ed828d22\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.826945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7jw2\" (UniqueName: \"kubernetes.io/projected/2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8-kube-api-access-x7jw2\") pod \"telemetry-operator-controller-manager-567f98c9d-x8j6n\" (UID: \"2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8\") " pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" Nov 24 19:33:27 crc kubenswrapper[4812]: W1124 19:33:27.838659 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaf03f1e_597c_434a_b90d_821e44d586d1.slice/crio-82792ebcbbd7ac6cb2a4a5f4d3afd308abe07b45fd73ead0b75ba80424804015 WatchSource:0}: Error finding container 82792ebcbbd7ac6cb2a4a5f4d3afd308abe07b45fd73ead0b75ba80424804015: Status 404 returned error can't find the container with id 82792ebcbbd7ac6cb2a4a5f4d3afd308abe07b45fd73ead0b75ba80424804015 Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.854854 
4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.875235 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjwdv\" (UniqueName: \"kubernetes.io/projected/8b38e013-64de-4e6d-8092-b9607b1c21f7-kube-api-access-jjwdv\") pod \"watcher-operator-controller-manager-864885998-h7bnm\" (UID: \"8b38e013-64de-4e6d-8092-b9607b1c21f7\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.875293 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.875367 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzqdp\" (UniqueName: \"kubernetes.io/projected/eff10de0-2386-4e12-97a9-b6f08b1eda95-kube-api-access-tzqdp\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.875402 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4nq\" (UniqueName: \"kubernetes.io/projected/a601c469-c7c1-4311-8927-a0ebccc2722b-kube-api-access-tq4nq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mrr28\" (UID: \"a601c469-c7c1-4311-8927-a0ebccc2722b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.875429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: E1124 19:33:27.875563 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 19:33:27 crc kubenswrapper[4812]: E1124 19:33:27.875610 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs podName:eff10de0-2386-4e12-97a9-b6f08b1eda95 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:28.375595632 +0000 UTC m=+1002.164548003 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-ntd69" (UID: "eff10de0-2386-4e12-97a9-b6f08b1eda95") : secret "metrics-server-cert" not found Nov 24 19:33:27 crc kubenswrapper[4812]: E1124 19:33:27.876004 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 19:33:27 crc kubenswrapper[4812]: E1124 19:33:27.876033 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs podName:eff10de0-2386-4e12-97a9-b6f08b1eda95 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:28.376026104 +0000 UTC m=+1002.164978475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-ntd69" (UID: "eff10de0-2386-4e12-97a9-b6f08b1eda95") : secret "webhook-server-cert" not found Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.908196 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzqdp\" (UniqueName: \"kubernetes.io/projected/eff10de0-2386-4e12-97a9-b6f08b1eda95-kube-api-access-tzqdp\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.912699 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjwdv\" (UniqueName: \"kubernetes.io/projected/8b38e013-64de-4e6d-8092-b9607b1c21f7-kube-api-access-jjwdv\") pod \"watcher-operator-controller-manager-864885998-h7bnm\" (UID: \"8b38e013-64de-4e6d-8092-b9607b1c21f7\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.948658 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" Nov 24 19:33:27 crc kubenswrapper[4812]: I1124 19:33:27.976962 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq4nq\" (UniqueName: \"kubernetes.io/projected/a601c469-c7c1-4311-8927-a0ebccc2722b-kube-api-access-tq4nq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mrr28\" (UID: \"a601c469-c7c1-4311-8927-a0ebccc2722b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.012124 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq4nq\" (UniqueName: \"kubernetes.io/projected/a601c469-c7c1-4311-8927-a0ebccc2722b-kube-api-access-tq4nq\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mrr28\" (UID: \"a601c469-c7c1-4311-8927-a0ebccc2722b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.058508 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.079825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.080037 4812 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.080120 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert podName:34cfb52c-d093-4d71-bba3-0ab2e2047e74 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:29.080104989 +0000 UTC m=+1002.869057360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert") pod "openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" (UID: "34cfb52c-d093-4d71-bba3-0ab2e2047e74") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.165932 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.368844 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" event={"ID":"baf03f1e-597c-434a-b90d-821e44d586d1","Type":"ContainerStarted","Data":"82792ebcbbd7ac6cb2a4a5f4d3afd308abe07b45fd73ead0b75ba80424804015"} Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.414636 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.417661 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.484963 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.486235 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.486455 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.486920 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs\") pod 
\"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.487134 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.487180 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs podName:eff10de0-2386-4e12-97a9-b6f08b1eda95 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:29.487162956 +0000 UTC m=+1003.276115327 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-ntd69" (UID: "eff10de0-2386-4e12-97a9-b6f08b1eda95") : secret "metrics-server-cert" not found Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.488384 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs podName:eff10de0-2386-4e12-97a9-b6f08b1eda95 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:29.488366471 +0000 UTC m=+1003.277318832 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-ntd69" (UID: "eff10de0-2386-4e12-97a9-b6f08b1eda95") : secret "webhook-server-cert" not found Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.498279 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.502307 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.586048 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq"] Nov 24 19:33:28 crc kubenswrapper[4812]: W1124 19:33:28.600037 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aafb524_8fa5_4752_b3f5_a4c700c8b4ee.slice/crio-66e48594e54c7c5c8a894891a7e49ba9051457b0128ba7001cb5f8aad66d3e36 WatchSource:0}: Error finding container 66e48594e54c7c5c8a894891a7e49ba9051457b0128ba7001cb5f8aad66d3e36: Status 404 returned error can't find the container with id 66e48594e54c7c5c8a894891a7e49ba9051457b0128ba7001cb5f8aad66d3e36 Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.613421 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.620406 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8"] Nov 24 19:33:28 crc kubenswrapper[4812]: W1124 19:33:28.640152 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a7a540a_2cc4_4703_bfa6_b85acdffe4a7.slice/crio-445f5fc55534a41b74b21c64f8980a74f8fca5ad59f0b3716f3c8eedfe30b20c 
WatchSource:0}: Error finding container 445f5fc55534a41b74b21c64f8980a74f8fca5ad59f0b3716f3c8eedfe30b20c: Status 404 returned error can't find the container with id 445f5fc55534a41b74b21c64f8980a74f8fca5ad59f0b3716f3c8eedfe30b20c Nov 24 19:33:28 crc kubenswrapper[4812]: W1124 19:33:28.641596 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod393e1e76_aacc_4ae3_95e3_cb2f4cddd6bb.slice/crio-d7858bd4e86e953680c5ac1aa490fec24201943874f9750a727c04a5a7baed35 WatchSource:0}: Error finding container d7858bd4e86e953680c5ac1aa490fec24201943874f9750a727c04a5a7baed35: Status 404 returned error can't find the container with id d7858bd4e86e953680c5ac1aa490fec24201943874f9750a727c04a5a7baed35 Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.746909 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.869451 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9"] Nov 24 19:33:28 crc kubenswrapper[4812]: W1124 19:33:28.886182 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ec7107c_8ae0_4a00_901e_e70ac99520e7.slice/crio-a8f84d8785eb6393fb0de733110da0a7cfb5d68d31a9f68c0d763487312d264e WatchSource:0}: Error finding container a8f84d8785eb6393fb0de733110da0a7cfb5d68d31a9f68c0d763487312d264e: Status 404 returned error can't find the container with id a8f84d8785eb6393fb0de733110da0a7cfb5d68d31a9f68c0d763487312d264e Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.895736 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.909889 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.917429 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l"] Nov 24 19:33:28 crc kubenswrapper[4812]: I1124 19:33:28.921416 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk"] Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.930800 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4r6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-7thtk_openstack-operators(39c92508-c338-4865-b7fb-aa0c026e5f6b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.933841 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4r6vk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-7thtk_openstack-operators(39c92508-c338-4865-b7fb-aa0c026e5f6b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.935117 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull 
QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" podUID="39c92508-c338-4865-b7fb-aa0c026e5f6b" Nov 24 19:33:28 crc kubenswrapper[4812]: W1124 19:33:28.937101 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod599321b1_4c22_4f86_a8fb_16ab7335db6a.slice/crio-4b102735891b1d3fcc3c89fea29f3669496ec89f7a7d3bf614c087c618f0d4fc WatchSource:0}: Error finding container 4b102735891b1d3fcc3c89fea29f3669496ec89f7a7d3bf614c087c618f0d4fc: Status 404 returned error can't find the container with id 4b102735891b1d3fcc3c89fea29f3669496ec89f7a7d3bf614c087c618f0d4fc Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.942062 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjb72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-jpljb_openstack-operators(599321b1-4c22-4f86-a8fb-16ab7335db6a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.945054 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjb72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-jpljb_openstack-operators(599321b1-4c22-4f86-a8fb-16ab7335db6a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:28 crc kubenswrapper[4812]: E1124 19:33:28.946378 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" podUID="599321b1-4c22-4f86-a8fb-16ab7335db6a" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.027195 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-28xt4"] Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.042382 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-h7bnm"] Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.047886 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hz5gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-28xt4_openstack-operators(05059f83-e196-4181-916f-ba36ed828d22): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: W1124 19:33:29.049769 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76a00195_5eec_4790_93cf_0167e77c2d69.slice/crio-53b0fb4e9351364cab09cc427d69dcfa0b8b1e75c3f2c3b4f0b8674f3e1c6608 WatchSource:0}: Error finding container 53b0fb4e9351364cab09cc427d69dcfa0b8b1e75c3f2c3b4f0b8674f3e1c6608: Status 404 returned error can't find the container with id 53b0fb4e9351364cab09cc427d69dcfa0b8b1e75c3f2c3b4f0b8674f3e1c6608 Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.050012 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hz5gz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-28xt4_openstack-operators(05059f83-e196-4181-916f-ba36ed828d22): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.051512 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" podUID="05059f83-e196-4181-916f-ba36ed828d22" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.053551 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx"] Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.053937 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8s2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-jg58x_openstack-operators(76a00195-5eec-4790-93cf-0167e77c2d69): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.056818 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x8s2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-jg58x_openstack-operators(76a00195-5eec-4790-93cf-0167e77c2d69): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: W1124 19:33:29.057365 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b38e013_64de_4e6d_8092_b9607b1c21f7.slice/crio-2c2a612bc87ef719744d93f8d75b17b476930069caa31ce6b93895d5f6c2670e WatchSource:0}: Error finding container 2c2a612bc87ef719744d93f8d75b17b476930069caa31ce6b93895d5f6c2670e: Status 404 returned error can't find the container with id 2c2a612bc87ef719744d93f8d75b17b476930069caa31ce6b93895d5f6c2670e Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.058484 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
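
Every "ErrImagePull: pull QPS exceeded" in this stretch comes from the kubelet's client-side rate limit on registry pulls, not from the registry: roughly twenty operator pods requested images within the same second. A token-bucket sketch using golang.org/x/time/rate; the 5 QPS / burst 10 values mirror the commonly documented registryPullQPS/registryBurst defaults and are used here as an assumption, not read from this node's config.

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// Token-bucket behaviour behind "pull QPS exceeded": the first burst
// of pulls drains the bucket, and the remainder are rejected and left
// for the sync loop to retry with back-off.
func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // assumed defaults: 5 QPS, burst 10
	for i := 1; i <= 20; i++ {                    // ~20 operator images requested at once
		if limiter.Allow() {
			fmt.Printf("pull %2d: started\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded (retried later)\n", i)
		}
	}
}
```
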
pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" podUID="76a00195-5eec-4790-93cf-0167e77c2d69" Nov 24 19:33:29 crc kubenswrapper[4812]: W1124 19:33:29.059955 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ac87fe9_d5cf_44a3_b5fe_0717ea4087c8.slice/crio-a355f4df34b296808cd8d86204a1b9625291551f24a1e897a3a528d95a4254f8 WatchSource:0}: Error finding container a355f4df34b296808cd8d86204a1b9625291551f24a1e897a3a528d95a4254f8: Status 404 returned error can't find the container with id a355f4df34b296808cd8d86204a1b9625291551f24a1e897a3a528d95a4254f8 Nov 24 19:33:29 crc kubenswrapper[4812]: W1124 19:33:29.062557 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31db8161_54b4_417d_b6ec_85109904df50.slice/crio-726e3659d81ca57d553c5ff7712d22e80174afbb8f71c708e148a4f2c55778d0 WatchSource:0}: Error finding container 726e3659d81ca57d553c5ff7712d22e80174afbb8f71c708e148a4f2c55778d0: Status 404 returned error can't find the container with id 726e3659d81ca57d553c5ff7712d22e80174afbb8f71c708e148a4f2c55778d0 Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.063908 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjwdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-h7bnm_openstack-operators(8b38e013-64de-4e6d-8092-b9607b1c21f7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.064172 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7jw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-x8j6n_openstack-operators(2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc 
kubenswrapper[4812]: E1124 19:33:29.065491 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjwdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-h7bnm_openstack-operators(8b38e013-64de-4e6d-8092-b9607b1c21f7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.065666 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7jw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-567f98c9d-x8j6n_openstack-operators(2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.065842 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-ptcwx_openstack-operators(31db8161-54b4-417d-b6ec-85109904df50): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.066773 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" podUID="2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.066847 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" podUID="8b38e013-64de-4e6d-8092-b9607b1c21f7" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.067540 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n"] Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.067636 4812 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4tfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-ptcwx_openstack-operators(31db8161-54b4-417d-b6ec-85109904df50): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.068778 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" podUID="31db8161-54b4-417d-b6ec-85109904df50" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.073440 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x"] Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.096225 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.106481 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/34cfb52c-d093-4d71-bba3-0ab2e2047e74-cert\") pod \"openstack-baremetal-operator-controller-manager-b58f89467-8bjbz\" (UID: \"34cfb52c-d093-4d71-bba3-0ab2e2047e74\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.168189 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28"] Nov 24 19:33:29 crc kubenswrapper[4812]: W1124 19:33:29.191070 4812 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda601c469_c7c1_4311_8927_a0ebccc2722b.slice/crio-19b0ff0a958c106c94ca46f8549ac393e96f9a62bdef52d620006f723e082ab0 WatchSource:0}: Error finding container 19b0ff0a958c106c94ca46f8549ac393e96f9a62bdef52d620006f723e082ab0: Status 404 returned error can't find the container with id 19b0ff0a958c106c94ca46f8549ac393e96f9a62bdef52d620006f723e082ab0 Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.257298 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.434592 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" event={"ID":"2ec7107c-8ae0-4a00-901e-e70ac99520e7","Type":"ContainerStarted","Data":"a8f84d8785eb6393fb0de733110da0a7cfb5d68d31a9f68c0d763487312d264e"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.442293 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" event={"ID":"b745dd21-3908-4dba-8966-a4be4aea8aa4","Type":"ContainerStarted","Data":"3f4ab4976bf8c5fd9b6ac67d85852c8932bd31c842799aa05ee8440c9da4cb3d"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.443621 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" event={"ID":"851fd2ec-631c-41d1-826f-934b3561cd70","Type":"ContainerStarted","Data":"93581bb52880d7f28d64ea60dc3f9e945b8f58191ae76287f9c43f3c422e7abd"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.458154 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" event={"ID":"3aafb524-8fa5-4752-b3f5-a4c700c8b4ee","Type":"ContainerStarted","Data":"66e48594e54c7c5c8a894891a7e49ba9051457b0128ba7001cb5f8aad66d3e36"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.467600 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" event={"ID":"599321b1-4c22-4f86-a8fb-16ab7335db6a","Type":"ContainerStarted","Data":"4b102735891b1d3fcc3c89fea29f3669496ec89f7a7d3bf614c087c618f0d4fc"} Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.475114 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" podUID="599321b1-4c22-4f86-a8fb-16ab7335db6a" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.489753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" event={"ID":"39c92508-c338-4865-b7fb-aa0c026e5f6b","Type":"ContainerStarted","Data":"0b0932732071547b587c066accda9e7a367028aae9a3f53abe19f23c51eedcc4"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.504091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.504230 4812 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.504258 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.504269 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs podName:eff10de0-2386-4e12-97a9-b6f08b1eda95 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:31.504255914 +0000 UTC m=+1005.293208285 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs") pod "openstack-operator-controller-manager-7cd5954d9-ntd69" (UID: "eff10de0-2386-4e12-97a9-b6f08b1eda95") : secret "metrics-server-cert" not found Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.504677 4812 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.504808 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs podName:eff10de0-2386-4e12-97a9-b6f08b1eda95 nodeName:}" failed. No retries permitted until 2025-11-24 19:33:31.504794749 +0000 UTC m=+1005.293747120 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs") pod "openstack-operator-controller-manager-7cd5954d9-ntd69" (UID: "eff10de0-2386-4e12-97a9-b6f08b1eda95") : secret "webhook-server-cert" not found Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.505964 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" podUID="39c92508-c338-4865-b7fb-aa0c026e5f6b" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.514599 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" event={"ID":"82485996-0922-41c0-903c-e44eadf8be30","Type":"ContainerStarted","Data":"38e92724aafc01322d2a2bcecb00bbbcbbfc19b445f1427f95179d9fc11cd1d3"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.517945 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" event={"ID":"aa881065-4590-4fbb-9f31-926164d07125","Type":"ContainerStarted","Data":"c53cd2e5865142b60e915f428af571fa9e62281b1af999c0a731eb758961aae7"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.521736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" event={"ID":"05059f83-e196-4181-916f-ba36ed828d22","Type":"ContainerStarted","Data":"4403ec01f4f61362f5bc58f47c7fd908d7eea2342c691214d8efe476697d40a4"} Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.524028 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" podUID="05059f83-e196-4181-916f-ba36ed828d22" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.524581 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" event={"ID":"8b38e013-64de-4e6d-8092-b9607b1c21f7","Type":"ContainerStarted","Data":"2c2a612bc87ef719744d93f8d75b17b476930069caa31ce6b93895d5f6c2670e"} Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.528132 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" podUID="8b38e013-64de-4e6d-8092-b9607b1c21f7" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.532126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" event={"ID":"2f7164ed-0304-404a-938f-134952b55d15","Type":"ContainerStarted","Data":"5c2a5236f6f0187174646ece837b04277dba6feb1403518552de8f448a57ffc3"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.533611 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" event={"ID":"a601c469-c7c1-4311-8927-a0ebccc2722b","Type":"ContainerStarted","Data":"19b0ff0a958c106c94ca46f8549ac393e96f9a62bdef52d620006f723e082ab0"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.543189 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" event={"ID":"e41fac25-1110-4e6e-a45e-5529ef0ef2f1","Type":"ContainerStarted","Data":"6171c077c33c7a13ffd7d5efdfe8fa78cbd08d3439a9a9a9e45d7f22e0605819"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.573254 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" event={"ID":"76a00195-5eec-4790-93cf-0167e77c2d69","Type":"ContainerStarted","Data":"53b0fb4e9351364cab09cc427d69dcfa0b8b1e75c3f2c3b4f0b8674f3e1c6608"} Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.575401 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" podUID="76a00195-5eec-4790-93cf-0167e77c2d69" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.576132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" event={"ID":"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7","Type":"ContainerStarted","Data":"445f5fc55534a41b74b21c64f8980a74f8fca5ad59f0b3716f3c8eedfe30b20c"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.584508 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" event={"ID":"393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb","Type":"ContainerStarted","Data":"d7858bd4e86e953680c5ac1aa490fec24201943874f9750a727c04a5a7baed35"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.592800 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" event={"ID":"31db8161-54b4-417d-b6ec-85109904df50","Type":"ContainerStarted","Data":"726e3659d81ca57d553c5ff7712d22e80174afbb8f71c708e148a4f2c55778d0"} Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.595784 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" podUID="31db8161-54b4-417d-b6ec-85109904df50" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.596143 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" event={"ID":"8e0c969b-35df-4a7c-85ff-ae79ea881c06","Type":"ContainerStarted","Data":"d23daac51b0750d158d33378323672988c3ba94a94dc7082dbfe149fa2111bfb"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.598992 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" event={"ID":"2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8","Type":"ContainerStarted","Data":"a355f4df34b296808cd8d86204a1b9625291551f24a1e897a3a528d95a4254f8"} Nov 24 19:33:29 crc kubenswrapper[4812]: E1124 19:33:29.601783 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" podUID="2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8" Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.601886 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" event={"ID":"c317b8b9-c5b6-4aa3-b666-425be5ee68fc","Type":"ContainerStarted","Data":"bbd4e088530ed1dfd2b9badd856b6ece8c92998298c74b38480ae797488e1a26"} Nov 24 19:33:29 crc kubenswrapper[4812]: I1124 19:33:29.846756 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz"] Nov 24 19:33:30 crc kubenswrapper[4812]: I1124 19:33:30.621136 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" event={"ID":"34cfb52c-d093-4d71-bba3-0ab2e2047e74","Type":"ContainerStarted","Data":"40099da2748bf7f9f0524de6f68808c3f04717241eb6bc36f6f7272994dec3cb"} Nov 24 19:33:30 crc kubenswrapper[4812]: E1124 19:33:30.624090 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" podUID="05059f83-e196-4181-916f-ba36ed828d22" Nov 24 19:33:30 crc kubenswrapper[4812]: E1124 19:33:30.624270 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" podUID="8b38e013-64de-4e6d-8092-b9607b1c21f7" Nov 24 19:33:30 crc kubenswrapper[4812]: E1124 19:33:30.624832 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" podUID="76a00195-5eec-4790-93cf-0167e77c2d69" Nov 24 19:33:30 crc kubenswrapper[4812]: E1124 19:33:30.625447 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" podUID="599321b1-4c22-4f86-a8fb-16ab7335db6a" Nov 24 19:33:30 crc kubenswrapper[4812]: E1124 19:33:30.625574 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" podUID="31db8161-54b4-417d-b6ec-85109904df50" Nov 24 19:33:30 crc kubenswrapper[4812]: E1124 19:33:30.629551 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" podUID="39c92508-c338-4865-b7fb-aa0c026e5f6b" Nov 24 19:33:30 crc kubenswrapper[4812]: E1124 19:33:30.629612 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" podUID="2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8" Nov 24 19:33:31 crc kubenswrapper[4812]: I1124 
19:33:31.537307 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:31 crc kubenswrapper[4812]: I1124 19:33:31.537805 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:31 crc kubenswrapper[4812]: I1124 19:33:31.543247 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-webhook-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:31 crc kubenswrapper[4812]: I1124 19:33:31.543370 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eff10de0-2386-4e12-97a9-b6f08b1eda95-metrics-certs\") pod \"openstack-operator-controller-manager-7cd5954d9-ntd69\" (UID: \"eff10de0-2386-4e12-97a9-b6f08b1eda95\") " pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:31 crc kubenswrapper[4812]: I1124 19:33:31.685788 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:32 crc kubenswrapper[4812]: I1124 19:33:32.998455 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:33:32 crc kubenswrapper[4812]: I1124 19:33:32.998529 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.153006 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69"] Nov 24 19:33:40 crc kubenswrapper[4812]: W1124 19:33:40.227676 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff10de0_2386_4e12_97a9_b6f08b1eda95.slice/crio-dd49af18b493ccf05313b1ae6825a336f29833f5fed77f4fdf00957fe2a84b7a WatchSource:0}: Error finding container dd49af18b493ccf05313b1ae6825a336f29833f5fed77f4fdf00957fe2a84b7a: Status 404 returned error can't find the container with id dd49af18b493ccf05313b1ae6825a336f29833f5fed77f4fdf00957fe2a84b7a Nov 24 19:33:40 crc kubenswrapper[4812]: E1124 19:33:40.558363 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jk9nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-mz2ht_openstack-operators(2f7164ed-0304-404a-938f-134952b55d15): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:40 crc kubenswrapper[4812]: E1124 19:33:40.560258 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" podUID="2f7164ed-0304-404a-938f-134952b55d15" Nov 24 19:33:40 crc kubenswrapper[4812]: E1124 19:33:40.598173 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sgx9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-5fw6l_openstack-operators(b745dd21-3908-4dba-8966-a4be4aea8aa4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 19:33:40 crc kubenswrapper[4812]: E1124 19:33:40.599323 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" podUID="b745dd21-3908-4dba-8966-a4be4aea8aa4" Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.698936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" event={"ID":"3aafb524-8fa5-4752-b3f5-a4c700c8b4ee","Type":"ContainerStarted","Data":"ced9c2694187e715893de10fe0f190f491a5ca967a60199344df8c8ef6bb8cde"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.702967 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" event={"ID":"82485996-0922-41c0-903c-e44eadf8be30","Type":"ContainerStarted","Data":"5855ef9337b6f47087dc4eec4d47085ab2eb1ab8efe7c22abf2e235b1ba6cfa5"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.707062 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" event={"ID":"2f7164ed-0304-404a-938f-134952b55d15","Type":"ContainerStarted","Data":"42fcdb9045ae3fc4f3ab995cb7376f27f60e3640850aecdf91028c9346508b22"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.707669 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" Nov 24 19:33:40 crc kubenswrapper[4812]: E1124 19:33:40.709432 4812 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" podUID="2f7164ed-0304-404a-938f-134952b55d15" Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.710705 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" event={"ID":"8e0c969b-35df-4a7c-85ff-ae79ea881c06","Type":"ContainerStarted","Data":"f1e8080bda0f0d64559464640e3f9941fc1b86b1c8b963030face9b7f1d84da3"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.712963 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" event={"ID":"e41fac25-1110-4e6e-a45e-5529ef0ef2f1","Type":"ContainerStarted","Data":"dfc679c20987853223c6193e91509183f5cc8e455d51d0d9d59a9b7f17ec005f"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.746736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" event={"ID":"aa881065-4590-4fbb-9f31-926164d07125","Type":"ContainerStarted","Data":"f47da1fdbee384f6d2e794ee6684b6fd67b663c8a0a84cfbdd9a456520a393a9"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.769985 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" event={"ID":"2ec7107c-8ae0-4a00-901e-e70ac99520e7","Type":"ContainerStarted","Data":"97f8aa0edd86ea297c2221ef3f10e267a54eee9bb6e3f250890fa0e0a53b327b"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.775718 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" event={"ID":"851fd2ec-631c-41d1-826f-934b3561cd70","Type":"ContainerStarted","Data":"6159e6b3c725bd65b9ae738d209e8e228d3140c2e14fdcfbf76821ba33ba1ef3"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.804230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" event={"ID":"393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb","Type":"ContainerStarted","Data":"188e19545816984515db28afd5f873432a7bdaecb7ae220f681d15bf5f4b3028"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.807163 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" event={"ID":"b745dd21-3908-4dba-8966-a4be4aea8aa4","Type":"ContainerStarted","Data":"650f7ed2a668f6c1f1ffd6811ad08776da97820eff03671fcba4b2df8402d149"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.807300 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" Nov 24 19:33:40 crc kubenswrapper[4812]: E1124 19:33:40.810101 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" podUID="b745dd21-3908-4dba-8966-a4be4aea8aa4" Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.811438 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" event={"ID":"c317b8b9-c5b6-4aa3-b666-425be5ee68fc","Type":"ContainerStarted","Data":"edf954d0f448047af250ce89234c5772c4090faec0a39a79fbdf5a6bd1a10da6"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.812923 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" event={"ID":"34cfb52c-d093-4d71-bba3-0ab2e2047e74","Type":"ContainerStarted","Data":"f8291ddf513f3abaef8a67501c8af4aef29f227f822d4b030a33a4fc6ebe4acf"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.837447 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" event={"ID":"a601c469-c7c1-4311-8927-a0ebccc2722b","Type":"ContainerStarted","Data":"f25eec6c0a8029cd118e88b6ba6cfb96d077415a6ddbc3d3f8ac495c5bc91e28"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.845000 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" event={"ID":"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7","Type":"ContainerStarted","Data":"6027ce14aae76b4dae6dd0e977abc28819252c18e5da1d10456a59d802ee9ee9"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.861181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" event={"ID":"eff10de0-2386-4e12-97a9-b6f08b1eda95","Type":"ContainerStarted","Data":"f8fcf7d7f76843e80bf900ae52ac13178e99cc8097d7d1caa48a89480f303419"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.861227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" event={"ID":"eff10de0-2386-4e12-97a9-b6f08b1eda95","Type":"ContainerStarted","Data":"dd49af18b493ccf05313b1ae6825a336f29833f5fed77f4fdf00957fe2a84b7a"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.861583 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.878613 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mrr28" podStartSLOduration=3.105576717 podStartE2EDuration="13.878598739s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:29.195961916 +0000 UTC m=+1002.984914287" lastFinishedPulling="2025-11-24 19:33:39.968983898 +0000 UTC m=+1013.757936309" observedRunningTime="2025-11-24 19:33:40.87759576 +0000 UTC m=+1014.666548141" watchObservedRunningTime="2025-11-24 19:33:40.878598739 +0000 UTC m=+1014.667551110" Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.913896 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" event={"ID":"baf03f1e-597c-434a-b90d-821e44d586d1","Type":"ContainerStarted","Data":"964cb91d71e0b4c85e3a8c51a7f0885346266a9d8f50deaef72aeecd9b4d6c8c"} Nov 24 19:33:40 crc kubenswrapper[4812]: I1124 19:33:40.951953 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" podStartSLOduration=13.951936615 podStartE2EDuration="13.951936615s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:33:40.945264125 +0000 UTC m=+1014.734216496" watchObservedRunningTime="2025-11-24 19:33:40.951936615 +0000 UTC m=+1014.740888986" Nov 24 19:33:41 crc kubenswrapper[4812]: E1124 19:33:41.934892 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" podUID="b745dd21-3908-4dba-8966-a4be4aea8aa4" Nov 24 19:33:41 crc kubenswrapper[4812]: E1124 19:33:41.935431 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" podUID="2f7164ed-0304-404a-938f-134952b55d15" Nov 24 19:33:47 crc kubenswrapper[4812]: I1124 19:33:47.438278 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" Nov 24 19:33:47 crc kubenswrapper[4812]: E1124 19:33:47.441133 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" podUID="2f7164ed-0304-404a-938f-134952b55d15" Nov 24 19:33:47 crc kubenswrapper[4812]: I1124 19:33:47.690282 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" Nov 24 19:33:47 crc kubenswrapper[4812]: E1124 19:33:47.695517 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" podUID="b745dd21-3908-4dba-8966-a4be4aea8aa4" Nov 24 19:33:51 crc kubenswrapper[4812]: I1124 19:33:51.692082 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7cd5954d9-ntd69" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.096852 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" event={"ID":"31db8161-54b4-417d-b6ec-85109904df50","Type":"ContainerStarted","Data":"bdb8998c48e1fb36fa9c9755e22f6ea59c94a6021080e0ccfdcecc0641e2d50e"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.097292 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" event={"ID":"31db8161-54b4-417d-b6ec-85109904df50","Type":"ContainerStarted","Data":"29a05e16843f6137d31db232cf4456b8d1715683158d288ed2712e92a0abe56f"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.098323 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.108979 
4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" event={"ID":"c317b8b9-c5b6-4aa3-b666-425be5ee68fc","Type":"ContainerStarted","Data":"2061f02cdc16db698675afea9eb6d4f6bbb267c016c8c5ef99d2d98aaec3b08f"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.109874 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.113150 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.121668 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" event={"ID":"851fd2ec-631c-41d1-826f-934b3561cd70","Type":"ContainerStarted","Data":"a11f553051abd93bd4ef2ab5c86d3a6ba9e99bb67c2c54b39242b1ae06ceca13"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.122687 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" podStartSLOduration=3.479879753 podStartE2EDuration="28.122665123s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:29.065694761 +0000 UTC m=+1002.854647122" lastFinishedPulling="2025-11-24 19:33:53.708480131 +0000 UTC m=+1027.497432492" observedRunningTime="2025-11-24 19:33:55.11905427 +0000 UTC m=+1028.908006641" watchObservedRunningTime="2025-11-24 19:33:55.122665123 +0000 UTC m=+1028.911617494" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.124602 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.130933 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.141299 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-6h6pl" podStartSLOduration=2.654020313 podStartE2EDuration="28.141272242s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.472199171 +0000 UTC m=+1002.261151542" lastFinishedPulling="2025-11-24 19:33:53.95945106 +0000 UTC m=+1027.748403471" observedRunningTime="2025-11-24 19:33:55.137634549 +0000 UTC m=+1028.926586920" watchObservedRunningTime="2025-11-24 19:33:55.141272242 +0000 UTC m=+1028.930224613" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.146674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" event={"ID":"e41fac25-1110-4e6e-a45e-5529ef0ef2f1","Type":"ContainerStarted","Data":"bc0828c53389731dc02d37ceaa9a9474eb2438db128f08920b3a752db22955c5"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.147058 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.154101 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.154216 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" event={"ID":"3aafb524-8fa5-4752-b3f5-a4c700c8b4ee","Type":"ContainerStarted","Data":"9487506ab5c2aa83e725f8a630b7e7ad00ba7c1cc1ad41b3f579d0e85833f271"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.158842 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.175989 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.204254 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-b5ghq" podStartSLOduration=2.8722192189999998 podStartE2EDuration="28.204239063s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.626923751 +0000 UTC m=+1002.415876132" lastFinishedPulling="2025-11-24 19:33:53.958943585 +0000 UTC m=+1027.747895976" observedRunningTime="2025-11-24 19:33:55.201761053 +0000 UTC m=+1028.990713424" watchObservedRunningTime="2025-11-24 19:33:55.204239063 +0000 UTC m=+1028.993191424" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.207194 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" event={"ID":"05059f83-e196-4181-916f-ba36ed828d22","Type":"ContainerStarted","Data":"f5633a331feddabcb060e3ea235e6dfe2ab87f56400d5c29f738fc7d5c911ad6"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.208756 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.212313 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-48cqh" podStartSLOduration=3.727140543 podStartE2EDuration="29.212293402s" podCreationTimestamp="2025-11-24 19:33:26 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.473794266 +0000 UTC m=+1002.262746637" lastFinishedPulling="2025-11-24 19:33:53.958947125 +0000 UTC m=+1027.747899496" observedRunningTime="2025-11-24 19:33:55.17565691 +0000 UTC m=+1028.964609281" watchObservedRunningTime="2025-11-24 19:33:55.212293402 +0000 UTC m=+1029.001245773" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.258832 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-98txq" podStartSLOduration=3.7334309020000003 podStartE2EDuration="29.258809405s" podCreationTimestamp="2025-11-24 19:33:26 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.471841921 +0000 UTC m=+1002.260794292" lastFinishedPulling="2025-11-24 19:33:53.997220414 +0000 UTC m=+1027.786172795" observedRunningTime="2025-11-24 19:33:55.226177647 +0000 UTC m=+1029.015130018" watchObservedRunningTime="2025-11-24 19:33:55.258809405 +0000 UTC m=+1029.047761776" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.272616 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" event={"ID":"2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8","Type":"ContainerStarted","Data":"8dcd7a96dac76d0ff54b44324c1830002717c13cac7c754b32a289e07402208d"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.276623 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" podStartSLOduration=3.615808748 podStartE2EDuration="28.276606611s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:29.047753171 +0000 UTC m=+1002.836705542" lastFinishedPulling="2025-11-24 19:33:53.708551034 +0000 UTC m=+1027.497503405" observedRunningTime="2025-11-24 19:33:55.273137973 +0000 UTC m=+1029.062090344" watchObservedRunningTime="2025-11-24 19:33:55.276606611 +0000 UTC m=+1029.065558982" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.305813 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" event={"ID":"8e0c969b-35df-4a7c-85ff-ae79ea881c06","Type":"ContainerStarted","Data":"35e1c7577f1c90f837eedb71ba25c9ed251af05902f5fee637fdfcfb3e26863f"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.307418 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.322921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" event={"ID":"599321b1-4c22-4f86-a8fb-16ab7335db6a","Type":"ContainerStarted","Data":"39b7b2ac6f3efd9eb75ae5b90d7303122c8c30a569eb9148231c760a02af3d08"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.322988 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" event={"ID":"599321b1-4c22-4f86-a8fb-16ab7335db6a","Type":"ContainerStarted","Data":"46ff84233a966a0ad705d664043ef77ac85c3833a7b63f57ec9509b17e94d8fe"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.323790 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.329328 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.331196 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cgt44" podStartSLOduration=3.169925487 podStartE2EDuration="28.331184874s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.925351129 +0000 UTC m=+1002.714303500" lastFinishedPulling="2025-11-24 19:33:54.086610516 +0000 UTC m=+1027.875562887" observedRunningTime="2025-11-24 19:33:55.329723152 +0000 UTC m=+1029.118675543" watchObservedRunningTime="2025-11-24 19:33:55.331184874 +0000 UTC m=+1029.120137255" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.352948 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" 
event={"ID":"39c92508-c338-4865-b7fb-aa0c026e5f6b","Type":"ContainerStarted","Data":"f81a8700a281b6eb31ac837911b4382c2c2d2d17b54359e77e0ea1fbe9557ea8"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.353612 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.354393 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" podStartSLOduration=8.733445332 podStartE2EDuration="28.354383453s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.941681384 +0000 UTC m=+1002.730633775" lastFinishedPulling="2025-11-24 19:33:48.562619525 +0000 UTC m=+1022.351571896" observedRunningTime="2025-11-24 19:33:55.352698306 +0000 UTC m=+1029.141650687" watchObservedRunningTime="2025-11-24 19:33:55.354383453 +0000 UTC m=+1029.143335824" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.377713 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" event={"ID":"393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb","Type":"ContainerStarted","Data":"7076b959d3960dc63a5537829b5b92484526c9c0ade3db184e720f710cc78273"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.378317 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.380552 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.383438 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" event={"ID":"8b38e013-64de-4e6d-8092-b9607b1c21f7","Type":"ContainerStarted","Data":"eb60fafd9d0a44b8afdd213a94999c1ac55e801d8412aa1beb52b1cd5aa4caae"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.383554 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" event={"ID":"8b38e013-64de-4e6d-8092-b9607b1c21f7","Type":"ContainerStarted","Data":"2cad62f9f6448a52a56fdefa6c22a36d2ce521ba6b819f1eaacf4fcbd6a1b720"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.384201 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.394827 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" event={"ID":"baf03f1e-597c-434a-b90d-821e44d586d1","Type":"ContainerStarted","Data":"da62e5bdbd3cf0bff10546b3010dafa02d203e8c7af4c875ccf8a04f59259d6e"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.395993 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.401056 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.401670 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" event={"ID":"34cfb52c-d093-4d71-bba3-0ab2e2047e74","Type":"ContainerStarted","Data":"254b6fbb7421619c92670ba167ad027f35c0d6c8733e220bc8834e9ce2eb44d1"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.402666 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" podStartSLOduration=3.5612077749999997 podStartE2EDuration="28.402651056s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.930596228 +0000 UTC m=+1002.719548609" lastFinishedPulling="2025-11-24 19:33:53.772039519 +0000 UTC m=+1027.560991890" observedRunningTime="2025-11-24 19:33:55.399785785 +0000 UTC m=+1029.188738156" watchObservedRunningTime="2025-11-24 19:33:55.402651056 +0000 UTC m=+1029.191603417" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.402878 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.414648 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.427113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" event={"ID":"aa881065-4590-4fbb-9f31-926164d07125","Type":"ContainerStarted","Data":"b15251f46064d7f4646c389f0650c9c2d3143058c239355bab086abce678de24"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.428173 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.430701 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-zxcm8" podStartSLOduration=3.092904187 podStartE2EDuration="28.430681854s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.646268052 +0000 UTC m=+1002.435220423" lastFinishedPulling="2025-11-24 19:33:53.984045719 +0000 UTC m=+1027.772998090" observedRunningTime="2025-11-24 19:33:55.415047999 +0000 UTC m=+1029.204000370" watchObservedRunningTime="2025-11-24 19:33:55.430681854 +0000 UTC m=+1029.219634225" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.436551 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.459597 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" event={"ID":"4a7a540a-2cc4-4703-bfa6-b85acdffe4a7","Type":"ContainerStarted","Data":"3613b86b7dffce76faeedaabbdfbcc9dbb0c53e6e6c8d8a06a8ba2d9738f1fb7"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.460715 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.461262 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-b58f89467-8bjbz" podStartSLOduration=4.325544394 podStartE2EDuration="28.461247903s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:29.871781677 +0000 UTC m=+1003.660734048" lastFinishedPulling="2025-11-24 19:33:54.007485186 +0000 UTC m=+1027.796437557" observedRunningTime="2025-11-24 19:33:55.458989059 +0000 UTC m=+1029.247941420" watchObservedRunningTime="2025-11-24 19:33:55.461247903 +0000 UTC m=+1029.250200284" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.478586 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.486229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" event={"ID":"2ec7107c-8ae0-4a00-901e-e70ac99520e7","Type":"ContainerStarted","Data":"c0a428192ca3977644292233e1c5261fb1fcb78aa345acb2c03d8beae7496ed0"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.489401 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.501595 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.505028 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-wqgxl" podStartSLOduration=3.01780695 podStartE2EDuration="29.505007778s" podCreationTimestamp="2025-11-24 19:33:26 +0000 UTC" firstStartedPulling="2025-11-24 19:33:27.850120448 +0000 UTC m=+1001.639072819" lastFinishedPulling="2025-11-24 19:33:54.337321276 +0000 UTC m=+1028.126273647" observedRunningTime="2025-11-24 19:33:55.485723709 +0000 UTC m=+1029.274676080" watchObservedRunningTime="2025-11-24 19:33:55.505007778 +0000 UTC m=+1029.293960149" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.511276 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" event={"ID":"76a00195-5eec-4790-93cf-0167e77c2d69","Type":"ContainerStarted","Data":"41c1baf8b2903eea90100dd3c8642539c1593f1b839914e37902c6e46bd8aff7"} Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.512209 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.514410 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" podStartSLOduration=3.6932757819999997 podStartE2EDuration="28.514400025s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:29.063742625 +0000 UTC m=+1002.852694996" lastFinishedPulling="2025-11-24 19:33:53.884866868 +0000 UTC m=+1027.673819239" observedRunningTime="2025-11-24 19:33:55.510965457 +0000 UTC m=+1029.299917828" watchObservedRunningTime="2025-11-24 19:33:55.514400025 +0000 UTC m=+1029.303352396" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.589642 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-s9pm6" podStartSLOduration=3.389149681 podStartE2EDuration="28.589620994s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.758487133 +0000 UTC m=+1002.547439504" lastFinishedPulling="2025-11-24 19:33:53.958958436 +0000 UTC m=+1027.747910817" observedRunningTime="2025-11-24 19:33:55.577654754 +0000 UTC m=+1029.366607125" watchObservedRunningTime="2025-11-24 19:33:55.589620994 +0000 UTC m=+1029.378573355" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.641073 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" podStartSLOduration=5.034297503 podStartE2EDuration="28.641057517s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:29.053815863 +0000 UTC m=+1002.842768234" lastFinishedPulling="2025-11-24 19:33:52.660575877 +0000 UTC m=+1026.449528248" observedRunningTime="2025-11-24 19:33:55.640602364 +0000 UTC m=+1029.429554735" watchObservedRunningTime="2025-11-24 19:33:55.641057517 +0000 UTC m=+1029.430009878" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.641750 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-fprj2" podStartSLOduration=3.368270759 podStartE2EDuration="28.641746777s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.646026815 +0000 UTC m=+1002.434979186" lastFinishedPulling="2025-11-24 19:33:53.919502833 +0000 UTC m=+1027.708455204" observedRunningTime="2025-11-24 19:33:55.612079773 +0000 UTC m=+1029.401032144" watchObservedRunningTime="2025-11-24 19:33:55.641746777 +0000 UTC m=+1029.430699148" Nov 24 19:33:55 crc kubenswrapper[4812]: I1124 19:33:55.662070 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" podStartSLOduration=3.621998435 podStartE2EDuration="28.662048144s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.919981947 +0000 UTC m=+1002.708934318" lastFinishedPulling="2025-11-24 19:33:53.960031656 +0000 UTC m=+1027.748984027" observedRunningTime="2025-11-24 19:33:55.656483126 +0000 UTC m=+1029.445435497" watchObservedRunningTime="2025-11-24 19:33:55.662048144 +0000 UTC m=+1029.451000505" Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.523460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" event={"ID":"2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8","Type":"ContainerStarted","Data":"318e28e5e3eba6961f9d7088bec19f670c39975ef51bbf6d4c65bcfa71d29046"} Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.523616 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.526018 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" event={"ID":"76a00195-5eec-4790-93cf-0167e77c2d69","Type":"ContainerStarted","Data":"d5e47adf2e8c2bda40038c0358b22a28886b3055a34e7202f854b4c04390cea4"} Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.529391 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" event={"ID":"39c92508-c338-4865-b7fb-aa0c026e5f6b","Type":"ContainerStarted","Data":"148d43510c1526a143da7786a7ec19c5240ade852d571a6d98be002ffe85a590"} Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.533904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" event={"ID":"82485996-0922-41c0-903c-e44eadf8be30","Type":"ContainerStarted","Data":"1199057608fe7e99ce57879da4594129ac2f397f70638ad5e48dcb52632fd223"} Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.534106 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.537629 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" event={"ID":"05059f83-e196-4181-916f-ba36ed828d22","Type":"ContainerStarted","Data":"a7302016fe41826c335bd5f5f060962269f3059d9c5d0ab6b4bd58db5616314b"} Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.540321 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.551838 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" podStartSLOduration=5.954453054 podStartE2EDuration="29.551814031s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:29.06391475 +0000 UTC m=+1002.852867121" lastFinishedPulling="2025-11-24 19:33:52.661275727 +0000 UTC m=+1026.450228098" observedRunningTime="2025-11-24 19:33:56.547200719 +0000 UTC m=+1030.336153100" watchObservedRunningTime="2025-11-24 19:33:56.551814031 +0000 UTC m=+1030.340766412" Nov 24 19:33:56 crc kubenswrapper[4812]: I1124 19:33:56.569725 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-5j5g2" podStartSLOduration=4.032467649 podStartE2EDuration="29.569700569s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.496246485 +0000 UTC m=+1002.285198856" lastFinishedPulling="2025-11-24 19:33:54.033479395 +0000 UTC m=+1027.822431776" observedRunningTime="2025-11-24 19:33:56.566112567 +0000 UTC m=+1030.355064978" watchObservedRunningTime="2025-11-24 19:33:56.569700569 +0000 UTC m=+1030.358652960" Nov 24 19:34:00 crc kubenswrapper[4812]: I1124 19:34:00.579303 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" event={"ID":"b745dd21-3908-4dba-8966-a4be4aea8aa4","Type":"ContainerStarted","Data":"b831a29e5603b0e87faee4ad08261e289976a3828a60449c1c81dbeb1083d32a"} Nov 24 19:34:00 crc kubenswrapper[4812]: I1124 19:34:00.605020 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-5fw6l" podStartSLOduration=22.626211736 podStartE2EDuration="33.60499872s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.92500851 +0000 UTC m=+1002.713960881" lastFinishedPulling="2025-11-24 19:33:39.903795484 +0000 UTC m=+1013.692747865" observedRunningTime="2025-11-24 
19:34:00.602837718 +0000 UTC m=+1034.391790129" watchObservedRunningTime="2025-11-24 19:34:00.60499872 +0000 UTC m=+1034.393951101" Nov 24 19:34:02 crc kubenswrapper[4812]: I1124 19:34:02.597193 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" event={"ID":"2f7164ed-0304-404a-938f-134952b55d15","Type":"ContainerStarted","Data":"0cc649f9157c1a30aa2812f999d8665abf1adac97ede4223c6b4dfc02ef56cc8"} Nov 24 19:34:02 crc kubenswrapper[4812]: I1124 19:34:02.998874 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:34:02 crc kubenswrapper[4812]: I1124 19:34:02.998988 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:34:07 crc kubenswrapper[4812]: I1124 19:34:07.716872 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-jpljb" Nov 24 19:34:07 crc kubenswrapper[4812]: I1124 19:34:07.723736 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-7thtk" Nov 24 19:34:07 crc kubenswrapper[4812]: I1124 19:34:07.743758 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-mz2ht" podStartSLOduration=29.31491147 podStartE2EDuration="40.743735305s" podCreationTimestamp="2025-11-24 19:33:27 +0000 UTC" firstStartedPulling="2025-11-24 19:33:28.4721845 +0000 UTC m=+1002.261136871" lastFinishedPulling="2025-11-24 19:33:39.901008295 +0000 UTC m=+1013.689960706" observedRunningTime="2025-11-24 19:34:02.623466928 +0000 UTC m=+1036.412419349" watchObservedRunningTime="2025-11-24 19:34:07.743735305 +0000 UTC m=+1041.532687676" Nov 24 19:34:07 crc kubenswrapper[4812]: I1124 19:34:07.777317 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-jg58x" Nov 24 19:34:07 crc kubenswrapper[4812]: I1124 19:34:07.804550 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-ptcwx" Nov 24 19:34:07 crc kubenswrapper[4812]: I1124 19:34:07.858185 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-567f98c9d-x8j6n" Nov 24 19:34:07 crc kubenswrapper[4812]: I1124 19:34:07.951939 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-28xt4" Nov 24 19:34:08 crc kubenswrapper[4812]: I1124 19:34:08.066072 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-h7bnm" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.047461 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-bwbdb"] Nov 24 
19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.049450 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.054734 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-bwbdb"] Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.061498 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.061675 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.061709 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bch9m" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.068227 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.074362 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-72xv8"] Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.075658 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.079251 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.106929 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-72xv8"] Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.154493 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27766ce-c455-4000-a42c-b16fd701edf2-config\") pod \"dnsmasq-dns-7bdd77c89-bwbdb\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.154560 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk69l\" (UniqueName: \"kubernetes.io/projected/c27766ce-c455-4000-a42c-b16fd701edf2-kube-api-access-hk69l\") pod \"dnsmasq-dns-7bdd77c89-bwbdb\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.255593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscz6\" (UniqueName: \"kubernetes.io/projected/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-kube-api-access-kscz6\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.255708 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-config\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.255735 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-dns-svc\") pod 
\"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.255785 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27766ce-c455-4000-a42c-b16fd701edf2-config\") pod \"dnsmasq-dns-7bdd77c89-bwbdb\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.255831 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk69l\" (UniqueName: \"kubernetes.io/projected/c27766ce-c455-4000-a42c-b16fd701edf2-kube-api-access-hk69l\") pod \"dnsmasq-dns-7bdd77c89-bwbdb\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.256967 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27766ce-c455-4000-a42c-b16fd701edf2-config\") pod \"dnsmasq-dns-7bdd77c89-bwbdb\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.283758 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk69l\" (UniqueName: \"kubernetes.io/projected/c27766ce-c455-4000-a42c-b16fd701edf2-kube-api-access-hk69l\") pod \"dnsmasq-dns-7bdd77c89-bwbdb\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.357106 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-config\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.357173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-dns-svc\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.357307 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscz6\" (UniqueName: \"kubernetes.io/projected/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-kube-api-access-kscz6\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.358736 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-config\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.358930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-dns-svc\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 
crc kubenswrapper[4812]: I1124 19:34:23.388148 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscz6\" (UniqueName: \"kubernetes.io/projected/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-kube-api-access-kscz6\") pod \"dnsmasq-dns-6584b49599-72xv8\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.396903 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.407427 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.802742 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-72xv8"] Nov 24 19:34:23 crc kubenswrapper[4812]: W1124 19:34:23.806231 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7c9a9a9_b28c_4e3f_83c8_2b6204bc6dee.slice/crio-9175f9fc4c3b949a8acc08be4fb3ff75d1e3291b01db46ad1eab3c84c5be99a8 WatchSource:0}: Error finding container 9175f9fc4c3b949a8acc08be4fb3ff75d1e3291b01db46ad1eab3c84c5be99a8: Status 404 returned error can't find the container with id 9175f9fc4c3b949a8acc08be4fb3ff75d1e3291b01db46ad1eab3c84c5be99a8 Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.808941 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 19:34:23 crc kubenswrapper[4812]: I1124 19:34:23.933850 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-bwbdb"] Nov 24 19:34:23 crc kubenswrapper[4812]: W1124 19:34:23.940427 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc27766ce_c455_4000_a42c_b16fd701edf2.slice/crio-be7b801451f63c3b442e9002a2476b6068df2fd551a62ddf862957ed5b594995 WatchSource:0}: Error finding container be7b801451f63c3b442e9002a2476b6068df2fd551a62ddf862957ed5b594995: Status 404 returned error can't find the container with id be7b801451f63c3b442e9002a2476b6068df2fd551a62ddf862957ed5b594995 Nov 24 19:34:24 crc kubenswrapper[4812]: I1124 19:34:24.792133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" event={"ID":"c27766ce-c455-4000-a42c-b16fd701edf2","Type":"ContainerStarted","Data":"be7b801451f63c3b442e9002a2476b6068df2fd551a62ddf862957ed5b594995"} Nov 24 19:34:24 crc kubenswrapper[4812]: I1124 19:34:24.793360 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-72xv8" event={"ID":"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee","Type":"ContainerStarted","Data":"9175f9fc4c3b949a8acc08be4fb3ff75d1e3291b01db46ad1eab3c84c5be99a8"} Nov 24 19:34:25 crc kubenswrapper[4812]: I1124 19:34:25.880025 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-bwbdb"] Nov 24 19:34:25 crc kubenswrapper[4812]: I1124 19:34:25.901192 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-p5q9c"] Nov 24 19:34:25 crc kubenswrapper[4812]: I1124 19:34:25.910860 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-p5q9c"] Nov 24 19:34:25 crc kubenswrapper[4812]: I1124 19:34:25.910966 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:25 crc kubenswrapper[4812]: I1124 19:34:25.999231 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-config\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:25 crc kubenswrapper[4812]: I1124 19:34:25.999275 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:25 crc kubenswrapper[4812]: I1124 19:34:25.999349 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbsc\" (UniqueName: \"kubernetes.io/projected/13f7de91-4d87-4631-a9bd-efded8c9ea1f-kube-api-access-pvbsc\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.100816 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbsc\" (UniqueName: \"kubernetes.io/projected/13f7de91-4d87-4631-a9bd-efded8c9ea1f-kube-api-access-pvbsc\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.100946 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-config\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.100993 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.102723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-config\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.103266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-dns-svc\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.123727 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbsc\" (UniqueName: \"kubernetes.io/projected/13f7de91-4d87-4631-a9bd-efded8c9ea1f-kube-api-access-pvbsc\") pod \"dnsmasq-dns-7c6d9948dc-p5q9c\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " 
pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.173177 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-72xv8"] Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.198828 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8sccn"] Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.200108 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.214052 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8sccn"] Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.227686 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.303773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w69t5\" (UniqueName: \"kubernetes.io/projected/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-kube-api-access-w69t5\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.303934 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-dns-svc\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.304074 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-config\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.405456 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-config\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.405875 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w69t5\" (UniqueName: \"kubernetes.io/projected/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-kube-api-access-w69t5\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.405938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-dns-svc\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.406634 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-config\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: 
\"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.406749 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-dns-svc\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.510191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w69t5\" (UniqueName: \"kubernetes.io/projected/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-kube-api-access-w69t5\") pod \"dnsmasq-dns-6486446b9f-8sccn\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.518126 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.702305 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-p5q9c"] Nov 24 19:34:26 crc kubenswrapper[4812]: I1124 19:34:26.819880 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" event={"ID":"13f7de91-4d87-4631-a9bd-efded8c9ea1f","Type":"ContainerStarted","Data":"d5639aefe5afd698d5ab3f5dbd048b87e34b6dcc98cfc4a5ba60857779b4b90c"} Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.025013 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.027755 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.030786 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.032845 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.032989 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.033210 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.033776 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.033781 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.034483 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9ql7x" Nov 24 19:34:27 crc kubenswrapper[4812]: W1124 19:34:27.039003 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0791a0ac_3be3_46d3_bc7f_3b3f421dc984.slice/crio-67ae17f7256b77dcec82a30b7d7bded4e6bd4f35f0f2e5d692760a6967acbe89 WatchSource:0}: Error finding container 67ae17f7256b77dcec82a30b7d7bded4e6bd4f35f0f2e5d692760a6967acbe89: Status 404 returned error can't find the container with id 67ae17f7256b77dcec82a30b7d7bded4e6bd4f35f0f2e5d692760a6967acbe89 
Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.050829 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8sccn"] Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.062686 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225104 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225537 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225664 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cb8ede6-6163-4906-89f7-7fe6458edc36-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225761 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cb8ede6-6163-4906-89f7-7fe6458edc36-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225839 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225877 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225908 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4nv\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-kube-api-access-zn4nv\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.225995 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.313286 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.317483 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.324911 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.324980 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.325030 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.325036 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.325171 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.325267 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.325648 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2ccls" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cb8ede6-6163-4906-89f7-7fe6458edc36-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328743 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328793 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328814 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328835 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn4nv\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-kube-api-access-zn4nv\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328945 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.328991 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.329028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.329097 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cb8ede6-6163-4906-89f7-7fe6458edc36-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.329120 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.329437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.329762 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.330278 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.330662 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.332686 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.337131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cb8ede6-6163-4906-89f7-7fe6458edc36-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.352195 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.356475 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.375154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.375216 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.376234 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cb8ede6-6163-4906-89f7-7fe6458edc36-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.376509 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn4nv\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-kube-api-access-zn4nv\") pod 
\"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.378241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432049 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432125 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432150 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432175 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432509 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432537 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2cd03e-b999-48ea-b540-7fd35356ba8b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432563 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2cd03e-b999-48ea-b540-7fd35356ba8b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432612 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6plcr\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-kube-api-access-6plcr\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432671 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.432706 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535584 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6plcr\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-kube-api-access-6plcr\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535718 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535870 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535922 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.535938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.536076 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.536102 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2cd03e-b999-48ea-b540-7fd35356ba8b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.536119 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2cd03e-b999-48ea-b540-7fd35356ba8b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.536356 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.536411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.536916 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.537324 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.537705 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.538849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.539737 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2cd03e-b999-48ea-b540-7fd35356ba8b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.541416 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.554117 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2cd03e-b999-48ea-b540-7fd35356ba8b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.564681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6plcr\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-kube-api-access-6plcr\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.593108 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.611132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.650126 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.726092 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:34:27 crc kubenswrapper[4812]: I1124 19:34:27.834798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" event={"ID":"0791a0ac-3be3-46d3-bc7f-3b3f421dc984","Type":"ContainerStarted","Data":"67ae17f7256b77dcec82a30b7d7bded4e6bd4f35f0f2e5d692760a6967acbe89"} Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.210800 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 19:34:28 crc kubenswrapper[4812]: W1124 19:34:28.222897 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cb8ede6_6163_4906_89f7_7fe6458edc36.slice/crio-e4c3681637a7f9a6d4ce3d769069698835fa06a760dd9e1e654a8a69f0cbb071 WatchSource:0}: Error finding container e4c3681637a7f9a6d4ce3d769069698835fa06a760dd9e1e654a8a69f0cbb071: Status 404 returned error can't find the container with id e4c3681637a7f9a6d4ce3d769069698835fa06a760dd9e1e654a8a69f0cbb071 Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.369778 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 19:34:28 crc kubenswrapper[4812]: W1124 19:34:28.382522 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2cd03e_b999_48ea_b540_7fd35356ba8b.slice/crio-908faf8b3239cedbb34edf774cab8f878190278ed040ba8f8bc97dcc566dfde4 WatchSource:0}: Error finding container 908faf8b3239cedbb34edf774cab8f878190278ed040ba8f8bc97dcc566dfde4: Status 404 returned error can't find the container with id 908faf8b3239cedbb34edf774cab8f878190278ed040ba8f8bc97dcc566dfde4 Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.508782 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.510082 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.517357 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.517573 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.518940 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-zbtr8" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.519327 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.532540 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.545501 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.701684 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.701813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.701872 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdgzx\" (UniqueName: \"kubernetes.io/projected/b2264eb6-f494-4800-832b-d1e1d02daf4e-kube-api-access-mdgzx\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.701957 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.701978 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.702072 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.702183 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.702256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.804655 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.804970 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.804993 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdgzx\" (UniqueName: \"kubernetes.io/projected/b2264eb6-f494-4800-832b-d1e1d02daf4e-kube-api-access-mdgzx\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.805013 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.805029 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.805062 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.805089 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.805105 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.805594 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.805986 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-kolla-config\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.806189 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.807016 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-default\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.807424 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.812938 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.815009 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.821307 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdgzx\" (UniqueName: \"kubernetes.io/projected/b2264eb6-f494-4800-832b-d1e1d02daf4e-kube-api-access-mdgzx\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.836836 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.848051 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"7cb8ede6-6163-4906-89f7-7fe6458edc36","Type":"ContainerStarted","Data":"e4c3681637a7f9a6d4ce3d769069698835fa06a760dd9e1e654a8a69f0cbb071"} Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.848818 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 19:34:28 crc kubenswrapper[4812]: I1124 19:34:28.850248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2cd03e-b999-48ea-b540-7fd35356ba8b","Type":"ContainerStarted","Data":"908faf8b3239cedbb34edf774cab8f878190278ed040ba8f8bc97dcc566dfde4"} Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.440897 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 19:34:29 crc kubenswrapper[4812]: W1124 19:34:29.451987 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2264eb6_f494_4800_832b_d1e1d02daf4e.slice/crio-8e687f92f86ed01924f84d1ca0ca051d992fccfd8078818b09102e6f0e4652f8 WatchSource:0}: Error finding container 8e687f92f86ed01924f84d1ca0ca051d992fccfd8078818b09102e6f0e4652f8: Status 404 returned error can't find the container with id 8e687f92f86ed01924f84d1ca0ca051d992fccfd8078818b09102e6f0e4652f8 Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.775982 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.777273 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.780108 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.781168 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.782155 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.790714 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-lr5n5" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.799542 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.874188 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b2264eb6-f494-4800-832b-d1e1d02daf4e","Type":"ContainerStarted","Data":"8e687f92f86ed01924f84d1ca0ca051d992fccfd8078818b09102e6f0e4652f8"} Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.925028 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.925377 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.925407 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.925427 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.926348 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.926387 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g686\" (UniqueName: \"kubernetes.io/projected/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kube-api-access-4g686\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.926412 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:29 crc kubenswrapper[4812]: I1124 19:34:29.926442 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.027903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.027960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g686\" (UniqueName: \"kubernetes.io/projected/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kube-api-access-4g686\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.027987 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.028010 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.028050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.028104 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.028129 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.028160 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.030451 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.031461 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.031615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.031781 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" 
Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.035148 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.038308 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.044141 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.065835 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.075079 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g686\" (UniqueName: \"kubernetes.io/projected/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kube-api-access-4g686\") pod \"openstack-cell1-galera-0\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.110192 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.171185 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.172200 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.178053 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.178490 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.178642 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gmtcn" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.182220 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.336986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-memcached-tls-certs\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.337028 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx9q6\" (UniqueName: \"kubernetes.io/projected/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kube-api-access-qx9q6\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.337060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-config-data\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.337110 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kolla-config\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.337156 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-combined-ca-bundle\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.438558 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-combined-ca-bundle\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.438645 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-memcached-tls-certs\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.438666 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx9q6\" (UniqueName: 
\"kubernetes.io/projected/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kube-api-access-qx9q6\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.438711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-config-data\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.438747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kolla-config\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.440262 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-config-data\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.440689 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kolla-config\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.444142 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-memcached-tls-certs\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.446797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-combined-ca-bundle\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.467284 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx9q6\" (UniqueName: \"kubernetes.io/projected/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kube-api-access-qx9q6\") pod \"memcached-0\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.574275 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.788814 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 19:34:30 crc kubenswrapper[4812]: I1124 19:34:30.927749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c","Type":"ContainerStarted","Data":"81cbdbc0e4b9a56a7fb4ff4568fcc82b7298ac4cf48d0b2c832c246076473a95"} Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.199529 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 19:34:31 crc kubenswrapper[4812]: W1124 19:34:31.228684 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod174dd2a1_292b_4e07_8ed8_48d4109e9f57.slice/crio-97b35843411bd80efad5538b8fdc3d425ac2482bcd0f688f1834505060b3b6aa WatchSource:0}: Error finding container 97b35843411bd80efad5538b8fdc3d425ac2482bcd0f688f1834505060b3b6aa: Status 404 returned error can't find the container with id 97b35843411bd80efad5538b8fdc3d425ac2482bcd0f688f1834505060b3b6aa Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.749110 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.755316 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.772219 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-cd6z9" Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.776490 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k2q4\" (UniqueName: \"kubernetes.io/projected/c066990e-686f-4d3b-bb9d-185df0b741ec-kube-api-access-5k2q4\") pod \"kube-state-metrics-0\" (UID: \"c066990e-686f-4d3b-bb9d-185df0b741ec\") " pod="openstack/kube-state-metrics-0" Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.781170 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.877788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k2q4\" (UniqueName: \"kubernetes.io/projected/c066990e-686f-4d3b-bb9d-185df0b741ec-kube-api-access-5k2q4\") pod \"kube-state-metrics-0\" (UID: \"c066990e-686f-4d3b-bb9d-185df0b741ec\") " pod="openstack/kube-state-metrics-0" Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.910349 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k2q4\" (UniqueName: \"kubernetes.io/projected/c066990e-686f-4d3b-bb9d-185df0b741ec-kube-api-access-5k2q4\") pod \"kube-state-metrics-0\" (UID: \"c066990e-686f-4d3b-bb9d-185df0b741ec\") " pod="openstack/kube-state-metrics-0" Nov 24 19:34:31 crc kubenswrapper[4812]: I1124 19:34:31.935645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"174dd2a1-292b-4e07-8ed8-48d4109e9f57","Type":"ContainerStarted","Data":"97b35843411bd80efad5538b8fdc3d425ac2482bcd0f688f1834505060b3b6aa"} Nov 24 19:34:32 crc kubenswrapper[4812]: I1124 19:34:32.124162 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:34:32 crc kubenswrapper[4812]: I1124 19:34:32.706862 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:34:32 crc kubenswrapper[4812]: I1124 19:34:32.998513 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:34:32 crc kubenswrapper[4812]: I1124 19:34:32.998580 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:34:32 crc kubenswrapper[4812]: I1124 19:34:32.998628 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:34:32 crc kubenswrapper[4812]: I1124 19:34:32.999208 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0e66c2294fc71bbe8db0b484c7769b46468b08b26476c3c5d5d976af6fcf62b"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:34:32 crc kubenswrapper[4812]: I1124 19:34:32.999256 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://d0e66c2294fc71bbe8db0b484c7769b46468b08b26476c3c5d5d976af6fcf62b" gracePeriod=600 Nov 24 19:34:33 crc kubenswrapper[4812]: I1124 19:34:33.959170 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="d0e66c2294fc71bbe8db0b484c7769b46468b08b26476c3c5d5d976af6fcf62b" exitCode=0 Nov 24 19:34:33 crc kubenswrapper[4812]: I1124 19:34:33.959228 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"d0e66c2294fc71bbe8db0b484c7769b46468b08b26476c3c5d5d976af6fcf62b"} Nov 24 19:34:33 crc kubenswrapper[4812]: I1124 19:34:33.959276 4812 scope.go:117] "RemoveContainer" containerID="90a8aabf222225c1c2c6e28ca52be1f8419a4b406dd9e47fff7f3d92641c4332" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.277248 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-d48t8"] Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.285144 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.290314 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-2km2c" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.290361 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.290649 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.296396 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-d48t8"] Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.309321 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cgt9p"] Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.318942 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.328145 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cgt9p"] Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.456256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475jv\" (UniqueName: \"kubernetes.io/projected/4f3dca2d-7c6c-428d-9789-1463444fe46f-kube-api-access-475jv\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.456313 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-lib\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.456919 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run-ovn\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.456963 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f3dca2d-7c6c-428d-9789-1463444fe46f-scripts\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457004 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457025 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-combined-ca-bundle\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " 
pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457047 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-ovn-controller-tls-certs\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457116 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxtdj\" (UniqueName: \"kubernetes.io/projected/ba9f8224-4f20-4c27-b242-3385791aed68-kube-api-access-xxtdj\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457204 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-log-ovn\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457248 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba9f8224-4f20-4c27-b242-3385791aed68-scripts\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457300 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-log\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457343 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-etc-ovs\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.457406 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-run\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572604 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-combined-ca-bundle\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc 
kubenswrapper[4812]: I1124 19:34:36.572727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-ovn-controller-tls-certs\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxtdj\" (UniqueName: \"kubernetes.io/projected/ba9f8224-4f20-4c27-b242-3385791aed68-kube-api-access-xxtdj\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-log-ovn\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba9f8224-4f20-4c27-b242-3385791aed68-scripts\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-log\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572941 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-etc-ovs\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.572974 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-run\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573016 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475jv\" (UniqueName: \"kubernetes.io/projected/4f3dca2d-7c6c-428d-9789-1463444fe46f-kube-api-access-475jv\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-lib\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573119 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run-ovn\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573140 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f3dca2d-7c6c-428d-9789-1463444fe46f-scripts\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run-ovn\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573488 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-log\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-etc-ovs\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573532 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-log-ovn\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573541 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-run\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.573589 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-lib\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.575171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f3dca2d-7c6c-428d-9789-1463444fe46f-scripts\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.575570 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba9f8224-4f20-4c27-b242-3385791aed68-scripts\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.578628 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-ovn-controller-tls-certs\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.584994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-combined-ca-bundle\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.590426 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxtdj\" (UniqueName: \"kubernetes.io/projected/ba9f8224-4f20-4c27-b242-3385791aed68-kube-api-access-xxtdj\") pod \"ovn-controller-ovs-cgt9p\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.591175 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475jv\" (UniqueName: \"kubernetes.io/projected/4f3dca2d-7c6c-428d-9789-1463444fe46f-kube-api-access-475jv\") pod \"ovn-controller-d48t8\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.614836 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-d48t8" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.636779 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:34:36 crc kubenswrapper[4812]: I1124 19:34:36.990743 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c066990e-686f-4d3b-bb9d-185df0b741ec","Type":"ContainerStarted","Data":"ce01c7ef69b38dd6304cd8a104a2b1cbecd4f76a27c6f1e74293ac2bc37ec65c"} Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.173704 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.174850 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.181437 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.181691 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.183285 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.183614 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-pkn4t" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.185619 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.190105 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.286768 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29c07c15-c183-4546-8c7b-574776382e9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.286815 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.286850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.286876 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.287008 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.287198 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvskw\" (UniqueName: \"kubernetes.io/projected/29c07c15-c183-4546-8c7b-574776382e9b-kube-api-access-vvskw\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.287235 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.287254 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388570 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388655 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29c07c15-c183-4546-8c7b-574776382e9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388674 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388702 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388724 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388768 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.388822 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvskw\" (UniqueName: \"kubernetes.io/projected/29c07c15-c183-4546-8c7b-574776382e9b-kube-api-access-vvskw\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: 
I1124 19:34:37.388910 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.389378 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29c07c15-c183-4546-8c7b-574776382e9b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.390172 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.390263 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-config\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.396419 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.397292 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.399991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.408708 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvskw\" (UniqueName: \"kubernetes.io/projected/29c07c15-c183-4546-8c7b-574776382e9b-kube-api-access-vvskw\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.415121 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:37 crc kubenswrapper[4812]: I1124 19:34:37.510513 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.468669 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.476485 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.476587 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.479628 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.480174 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.480381 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.480545 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9k65f" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.606612 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.606657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.606681 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.606716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bnxv\" (UniqueName: \"kubernetes.io/projected/dfeb89d5-a81d-411c-8808-ae9f506780e2-kube-api-access-9bnxv\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.606959 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-config\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.607062 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 
19:34:38.607136 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.607455 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709744 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709765 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709801 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bnxv\" (UniqueName: \"kubernetes.io/projected/dfeb89d5-a81d-411c-8808-ae9f506780e2-kube-api-access-9bnxv\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709860 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-config\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709885 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709916 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.709943 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-combined-ca-bundle\") pod 
\"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.710791 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.711662 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.711989 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-config\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.712923 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.714495 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.741104 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.752915 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.760003 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bnxv\" (UniqueName: \"kubernetes.io/projected/dfeb89d5-a81d-411c-8808-ae9f506780e2-kube-api-access-9bnxv\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.766293 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:38 crc kubenswrapper[4812]: I1124 19:34:38.796778 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 19:34:43 crc kubenswrapper[4812]: E1124 19:34:43.180772 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b" Nov 24 19:34:43 crc kubenswrapper[4812]: E1124 19:34:43.181576 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:95d67f51dfedd5bd3ec785b488425295b2d8c41feae3e6386ef471615381809b,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6plcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(db2cd03e-b999-48ea-b540-7fd35356ba8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 19:34:43 crc kubenswrapper[4812]: E1124 19:34:43.182767 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.114281 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.115061 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvbsc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7c6d9948dc-p5q9c_openstack(13f7de91-4d87-4631-a9bd-efded8c9ea1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.117220 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" podUID="13f7de91-4d87-4631-a9bd-efded8c9ea1f" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.147558 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" podUID="13f7de91-4d87-4631-a9bd-efded8c9ea1f" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.158551 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.158715 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w69t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6486446b9f-8sccn_openstack(0791a0ac-3be3-46d3-bc7f-3b3f421dc984): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.161292 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" podUID="0791a0ac-3be3-46d3-bc7f-3b3f421dc984" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.170982 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.171224 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kscz6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-72xv8_openstack(b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.172771 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-72xv8" podUID="b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.266603 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.267122 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hk69l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-bwbdb_openstack(c27766ce-c455-4000-a42c-b16fd701edf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 19:34:54 crc kubenswrapper[4812]: E1124 19:34:54.268255 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" podUID="c27766ce-c455-4000-a42c-b16fd701edf2" Nov 24 19:34:54 crc kubenswrapper[4812]: I1124 19:34:54.672961 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cgt9p"] Nov 24 19:34:54 crc kubenswrapper[4812]: I1124 19:34:54.802183 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 19:34:54 crc kubenswrapper[4812]: I1124 19:34:54.883445 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-d48t8"] Nov 24 19:34:54 crc kubenswrapper[4812]: I1124 19:34:54.887616 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 19:34:54 crc kubenswrapper[4812]: W1124 19:34:54.890805 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29c07c15_c183_4546_8c7b_574776382e9b.slice/crio-eae3ae89009ffccbe83d7a6ba4cfe9b9ba857c8cf8f22d177ca4038a9cfd5056 WatchSource:0}: Error finding container 
eae3ae89009ffccbe83d7a6ba4cfe9b9ba857c8cf8f22d177ca4038a9cfd5056: Status 404 returned error can't find the container with id eae3ae89009ffccbe83d7a6ba4cfe9b9ba857c8cf8f22d177ca4038a9cfd5056 Nov 24 19:34:54 crc kubenswrapper[4812]: W1124 19:34:54.984942 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f3dca2d_7c6c_428d_9789_1463444fe46f.slice/crio-fa4205f170679e4765dd002880f1f84cf71f114abf126553d1d5a71badd9a9bc WatchSource:0}: Error finding container fa4205f170679e4765dd002880f1f84cf71f114abf126553d1d5a71badd9a9bc: Status 404 returned error can't find the container with id fa4205f170679e4765dd002880f1f84cf71f114abf126553d1d5a71badd9a9bc Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.154829 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerStarted","Data":"2ba64b4c1c22dae45b432998cfce6e7d324e8169cf8b0d00fc53fc3863ed3a32"} Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.156456 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-d48t8" event={"ID":"4f3dca2d-7c6c-428d-9789-1463444fe46f","Type":"ContainerStarted","Data":"fa4205f170679e4765dd002880f1f84cf71f114abf126553d1d5a71badd9a9bc"} Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.157479 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29c07c15-c183-4546-8c7b-574776382e9b","Type":"ContainerStarted","Data":"eae3ae89009ffccbe83d7a6ba4cfe9b9ba857c8cf8f22d177ca4038a9cfd5056"} Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.161157 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfeb89d5-a81d-411c-8808-ae9f506780e2","Type":"ContainerStarted","Data":"cbdf6f20669301fbd2866d17b92f7a44649a69a94ca6c94ae200cebc5be35a8c"} Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.164608 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"b36a42e19accf5660fd1a99293b3671794dc547e12b8dcbfeada632fbf2982a3"} Nov 24 19:34:55 crc kubenswrapper[4812]: E1124 19:34:55.165790 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" podUID="0791a0ac-3be3-46d3-bc7f-3b3f421dc984" Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.740183 4812 util.go:48] "No ready sandbox for pod can be found. 
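
From 19:34:43 the capture shows its main failure mode: PullImage RPCs cancelled mid-copy ("context canceled"), surfaced per pod as ErrImagePull and then ImagePullBackOff for the rabbitmq-cell1-server-0 setup-container and the init containers of several dnsmasq-dns pods, while cAdvisor separately logs 404s for container IDs it cannot find yet. A sketch for extracting those failures from a journal like this one:

```python
# Sketch: group cancelled image pulls by image and collect pods in back-off.
import re
from collections import defaultdict

PULL_FAIL = re.compile(r'"PullImage from image service failed".*?image="([^"]+)"')
BACKOFF   = re.compile(r'ImagePullBackOff.*?pod="([^"]+)"')

def pull_failures(lines):
    fails, backoffs = defaultdict(int), set()
    for line in lines:
        m = PULL_FAIL.search(line)
        if m:
            fails[m.group(1)] += 1
        b = BACKOFF.search(line)
        if b:
            backoffs.add(b.group(1))
    return fails, backoffs
```

On this excerpt it would count four cancelled pulls of the openstack-neutron-server image and one of openstack-rabbitmq, with back-offs recorded for dnsmasq-dns-7c6d9948dc-p5q9c and dnsmasq-dns-6486446b9f-8sccn.
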
Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.757037 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscz6\" (UniqueName: \"kubernetes.io/projected/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-kube-api-access-kscz6\") pod \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.757187 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-config\") pod \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.757277 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-dns-svc\") pod \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\" (UID: \"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee\") " Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.757687 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-config" (OuterVolumeSpecName: "config") pod "b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee" (UID: "b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.757719 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee" (UID: "b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.757921 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.757952 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.839957 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-kube-api-access-kscz6" (OuterVolumeSpecName: "kube-api-access-kscz6") pod "b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee" (UID: "b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee"). InnerVolumeSpecName "kube-api-access-kscz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:34:55 crc kubenswrapper[4812]: I1124 19:34:55.859264 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kscz6\" (UniqueName: \"kubernetes.io/projected/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee-kube-api-access-kscz6\") on node \"crc\" DevicePath \"\"" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.221876 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-72xv8" event={"ID":"b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee","Type":"ContainerDied","Data":"9175f9fc4c3b949a8acc08be4fb3ff75d1e3291b01db46ad1eab3c84c5be99a8"} Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.222011 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-72xv8" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.245701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"174dd2a1-292b-4e07-8ed8-48d4109e9f57","Type":"ContainerStarted","Data":"ea38aff07288e1f408c0f6650bafdbaba5ec40ffed460a32bda946e57bb0490c"} Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.247231 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.273500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b2264eb6-f494-4800-832b-d1e1d02daf4e","Type":"ContainerStarted","Data":"d053bfd0635142ea6ca3037bd768b91d0f01e9e71438c5bf178927a708e4ef99"} Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.282388 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.435983199 podStartE2EDuration="26.282189343s" podCreationTimestamp="2025-11-24 19:34:30 +0000 UTC" firstStartedPulling="2025-11-24 19:34:31.238110009 +0000 UTC m=+1065.027062380" lastFinishedPulling="2025-11-24 19:34:54.084316153 +0000 UTC m=+1087.873268524" observedRunningTime="2025-11-24 19:34:56.278767476 +0000 UTC m=+1090.067719857" watchObservedRunningTime="2025-11-24 19:34:56.282189343 +0000 UTC m=+1090.071141714" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.342250 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-72xv8"] Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.347075 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-72xv8"] Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.867930 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.872296 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27766ce-c455-4000-a42c-b16fd701edf2-config\") pod \"c27766ce-c455-4000-a42c-b16fd701edf2\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.872414 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk69l\" (UniqueName: \"kubernetes.io/projected/c27766ce-c455-4000-a42c-b16fd701edf2-kube-api-access-hk69l\") pod \"c27766ce-c455-4000-a42c-b16fd701edf2\" (UID: \"c27766ce-c455-4000-a42c-b16fd701edf2\") " Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.873388 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27766ce-c455-4000-a42c-b16fd701edf2-config" (OuterVolumeSpecName: "config") pod "c27766ce-c455-4000-a42c-b16fd701edf2" (UID: "c27766ce-c455-4000-a42c-b16fd701edf2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.878781 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27766ce-c455-4000-a42c-b16fd701edf2-kube-api-access-hk69l" (OuterVolumeSpecName: "kube-api-access-hk69l") pod "c27766ce-c455-4000-a42c-b16fd701edf2" (UID: "c27766ce-c455-4000-a42c-b16fd701edf2"). InnerVolumeSpecName "kube-api-access-hk69l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.975097 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27766ce-c455-4000-a42c-b16fd701edf2-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.975126 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk69l\" (UniqueName: \"kubernetes.io/projected/c27766ce-c455-4000-a42c-b16fd701edf2-kube-api-access-hk69l\") on node \"crc\" DevicePath \"\"" Nov 24 19:34:56 crc kubenswrapper[4812]: I1124 19:34:56.977802 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee" path="/var/lib/kubelet/pods/b7c9a9a9-b28c-4e3f-83c8-2b6204bc6dee/volumes" Nov 24 19:34:57 crc kubenswrapper[4812]: I1124 19:34:57.281468 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2cd03e-b999-48ea-b540-7fd35356ba8b","Type":"ContainerStarted","Data":"0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0"} Nov 24 19:34:57 crc kubenswrapper[4812]: I1124 19:34:57.282636 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" event={"ID":"c27766ce-c455-4000-a42c-b16fd701edf2","Type":"ContainerDied","Data":"be7b801451f63c3b442e9002a2476b6068df2fd551a62ddf862957ed5b594995"} Nov 24 19:34:57 crc kubenswrapper[4812]: I1124 19:34:57.282929 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-bwbdb" Nov 24 19:34:57 crc kubenswrapper[4812]: I1124 19:34:57.285383 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c","Type":"ContainerStarted","Data":"fa420cfb2941e121e115e251da95c5f8bb88cdb054bc433739412a5b4c2401d3"} Nov 24 19:34:57 crc kubenswrapper[4812]: I1124 19:34:57.288409 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7cb8ede6-6163-4906-89f7-7fe6458edc36","Type":"ContainerStarted","Data":"19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a"} Nov 24 19:34:57 crc kubenswrapper[4812]: I1124 19:34:57.426820 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-bwbdb"] Nov 24 19:34:57 crc kubenswrapper[4812]: I1124 19:34:57.433356 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-bwbdb"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.464811 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-n2bml"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.466630 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.470954 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.485399 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n2bml"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.599471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.599531 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfe0ed1-d668-482c-8303-2ca93bdfc057-config\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.599550 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovs-rundir\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.599576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26sgz\" (UniqueName: \"kubernetes.io/projected/8cfe0ed1-d668-482c-8303-2ca93bdfc057-kube-api-access-26sgz\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.599596 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovn-rundir\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.599657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-combined-ca-bundle\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.626538 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-p5q9c"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.650869 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-g5xtt"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.652326 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.655664 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.676970 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-g5xtt"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.701414 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfe0ed1-d668-482c-8303-2ca93bdfc057-config\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.701456 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovs-rundir\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.701488 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26sgz\" (UniqueName: \"kubernetes.io/projected/8cfe0ed1-d668-482c-8303-2ca93bdfc057-kube-api-access-26sgz\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.701506 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovn-rundir\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.701568 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-combined-ca-bundle\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.701611 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.702846 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfe0ed1-d668-482c-8303-2ca93bdfc057-config\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.703045 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovs-rundir\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.703283 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovn-rundir\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.710037 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.710645 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-combined-ca-bundle\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.722554 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26sgz\" (UniqueName: \"kubernetes.io/projected/8cfe0ed1-d668-482c-8303-2ca93bdfc057-kube-api-access-26sgz\") pod \"ovn-controller-metrics-n2bml\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.789322 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.802905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-config\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.802965 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52skr\" (UniqueName: \"kubernetes.io/projected/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-kube-api-access-52skr\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.803010 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-dns-svc\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.803053 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-ovsdbserver-sb\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.836244 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8sccn"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.904711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-dns-svc\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.904782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-ovsdbserver-sb\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.904836 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-config\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.904872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52skr\" (UniqueName: \"kubernetes.io/projected/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-kube-api-access-52skr\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.905942 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-dns-svc\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.904725 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-kfwgx"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.906964 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-ovsdbserver-sb\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.907229 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.907485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-config\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.917452 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.925619 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-kfwgx"] Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.966524 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52skr\" (UniqueName: \"kubernetes.io/projected/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-kube-api-access-52skr\") pod \"dnsmasq-dns-65c9b8d4f7-g5xtt\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.978708 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:34:58 crc kubenswrapper[4812]: I1124 19:34:58.986834 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27766ce-c455-4000-a42c-b16fd701edf2" path="/var/lib/kubelet/pods/c27766ce-c455-4000-a42c-b16fd701edf2/volumes" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.005968 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.006036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-config\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.006067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.006098 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj49g\" (UniqueName: \"kubernetes.io/projected/8c58650e-77d4-4825-90e6-28abfa829f45-kube-api-access-hj49g\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.006123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.116021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.116135 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-config\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.116185 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 
19:34:59.116236 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj49g\" (UniqueName: \"kubernetes.io/projected/8c58650e-77d4-4825-90e6-28abfa829f45-kube-api-access-hj49g\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.116274 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.117292 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.117312 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-config\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.118073 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.118140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.135631 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj49g\" (UniqueName: \"kubernetes.io/projected/8c58650e-77d4-4825-90e6-28abfa829f45-kube-api-access-hj49g\") pod \"dnsmasq-dns-5c476d78c5-kfwgx\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.248178 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.899353 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:34:59 crc kubenswrapper[4812]: I1124 19:34:59.977280 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.029417 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-dns-svc\") pod \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.029485 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvbsc\" (UniqueName: \"kubernetes.io/projected/13f7de91-4d87-4631-a9bd-efded8c9ea1f-kube-api-access-pvbsc\") pod \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.029604 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-config\") pod \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\" (UID: \"13f7de91-4d87-4631-a9bd-efded8c9ea1f\") " Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.031458 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-config" (OuterVolumeSpecName: "config") pod "13f7de91-4d87-4631-a9bd-efded8c9ea1f" (UID: "13f7de91-4d87-4631-a9bd-efded8c9ea1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.032161 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13f7de91-4d87-4631-a9bd-efded8c9ea1f" (UID: "13f7de91-4d87-4631-a9bd-efded8c9ea1f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.049575 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f7de91-4d87-4631-a9bd-efded8c9ea1f-kube-api-access-pvbsc" (OuterVolumeSpecName: "kube-api-access-pvbsc") pod "13f7de91-4d87-4631-a9bd-efded8c9ea1f" (UID: "13f7de91-4d87-4631-a9bd-efded8c9ea1f"). InnerVolumeSpecName "kube-api-access-pvbsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.130946 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-config\") pod \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131068 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w69t5\" (UniqueName: \"kubernetes.io/projected/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-kube-api-access-w69t5\") pod \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131179 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-dns-svc\") pod \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\" (UID: \"0791a0ac-3be3-46d3-bc7f-3b3f421dc984\") " Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-config" (OuterVolumeSpecName: "config") pod "0791a0ac-3be3-46d3-bc7f-3b3f421dc984" (UID: "0791a0ac-3be3-46d3-bc7f-3b3f421dc984"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131546 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0791a0ac-3be3-46d3-bc7f-3b3f421dc984" (UID: "0791a0ac-3be3-46d3-bc7f-3b3f421dc984"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131868 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131884 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131893 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131901 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13f7de91-4d87-4631-a9bd-efded8c9ea1f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.131911 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvbsc\" (UniqueName: \"kubernetes.io/projected/13f7de91-4d87-4631-a9bd-efded8c9ea1f-kube-api-access-pvbsc\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.138128 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-kube-api-access-w69t5" (OuterVolumeSpecName: "kube-api-access-w69t5") pod "0791a0ac-3be3-46d3-bc7f-3b3f421dc984" (UID: "0791a0ac-3be3-46d3-bc7f-3b3f421dc984"). InnerVolumeSpecName "kube-api-access-w69t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.216405 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-n2bml"] Nov 24 19:35:00 crc kubenswrapper[4812]: W1124 19:35:00.225100 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cfe0ed1_d668_482c_8303_2ca93bdfc057.slice/crio-7b608d902f8126cf5abff924aed7bf979e6a0f29b24dbc92e7e25d48f3863032 WatchSource:0}: Error finding container 7b608d902f8126cf5abff924aed7bf979e6a0f29b24dbc92e7e25d48f3863032: Status 404 returned error can't find the container with id 7b608d902f8126cf5abff924aed7bf979e6a0f29b24dbc92e7e25d48f3863032 Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.233565 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w69t5\" (UniqueName: \"kubernetes.io/projected/0791a0ac-3be3-46d3-bc7f-3b3f421dc984-kube-api-access-w69t5\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.291967 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-g5xtt"] Nov 24 19:35:00 crc kubenswrapper[4812]: W1124 19:35:00.299214 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00d308d2_17b6_46fe_94ac_0cc7e0fd1e27.slice/crio-97c9fc3b8e9619c32517080914a225b8b4272366939e5d6d5c47e54cd3955a34 WatchSource:0}: Error finding container 97c9fc3b8e9619c32517080914a225b8b4272366939e5d6d5c47e54cd3955a34: Status 404 returned error can't find the container with id 97c9fc3b8e9619c32517080914a225b8b4272366939e5d6d5c47e54cd3955a34 Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 
19:35:00.299611 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-kfwgx"] Nov 24 19:35:00 crc kubenswrapper[4812]: W1124 19:35:00.306245 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c58650e_77d4_4825_90e6_28abfa829f45.slice/crio-187b1f7cfc6dd9d2dd3199deb52ea3f128773cae512190e311695769a4591dea WatchSource:0}: Error finding container 187b1f7cfc6dd9d2dd3199deb52ea3f128773cae512190e311695769a4591dea: Status 404 returned error can't find the container with id 187b1f7cfc6dd9d2dd3199deb52ea3f128773cae512190e311695769a4591dea Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.314095 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" event={"ID":"13f7de91-4d87-4631-a9bd-efded8c9ea1f","Type":"ContainerDied","Data":"d5639aefe5afd698d5ab3f5dbd048b87e34b6dcc98cfc4a5ba60857779b4b90c"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.314115 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c6d9948dc-p5q9c" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.315798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" event={"ID":"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27","Type":"ContainerStarted","Data":"97c9fc3b8e9619c32517080914a225b8b4272366939e5d6d5c47e54cd3955a34"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.317545 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-d48t8" event={"ID":"4f3dca2d-7c6c-428d-9789-1463444fe46f","Type":"ContainerStarted","Data":"d5ff05e4d3f2fbc8607a3a9a2e2d57c02a86f1e0f1e41e800c7ae06df2e9f0dc"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.317624 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-d48t8" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.319227 4812 generic.go:334] "Generic (PLEG): container finished" podID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerID="d053bfd0635142ea6ca3037bd768b91d0f01e9e71438c5bf178927a708e4ef99" exitCode=0 Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.319288 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b2264eb6-f494-4800-832b-d1e1d02daf4e","Type":"ContainerDied","Data":"d053bfd0635142ea6ca3037bd768b91d0f01e9e71438c5bf178927a708e4ef99"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.321106 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c066990e-686f-4d3b-bb9d-185df0b741ec","Type":"ContainerStarted","Data":"6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.321204 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.325418 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.325414 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-8sccn" event={"ID":"0791a0ac-3be3-46d3-bc7f-3b3f421dc984","Type":"ContainerDied","Data":"67ae17f7256b77dcec82a30b7d7bded4e6bd4f35f0f2e5d692760a6967acbe89"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.327220 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerID="fa420cfb2941e121e115e251da95c5f8bb88cdb054bc433739412a5b4c2401d3" exitCode=0 Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.327298 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c","Type":"ContainerDied","Data":"fa420cfb2941e121e115e251da95c5f8bb88cdb054bc433739412a5b4c2401d3"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.328702 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n2bml" event={"ID":"8cfe0ed1-d668-482c-8303-2ca93bdfc057","Type":"ContainerStarted","Data":"7b608d902f8126cf5abff924aed7bf979e6a0f29b24dbc92e7e25d48f3863032"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.330703 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerStarted","Data":"accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.338459 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29c07c15-c183-4546-8c7b-574776382e9b","Type":"ContainerStarted","Data":"c1e926415066f8182e74a12929d98c3843c7ccb4cba6f365608af1c2c5278dee"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.340119 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-d48t8" podStartSLOduration=19.605984747 podStartE2EDuration="24.340107518s" podCreationTimestamp="2025-11-24 19:34:36 +0000 UTC" firstStartedPulling="2025-11-24 19:34:54.987320654 +0000 UTC m=+1088.776273025" lastFinishedPulling="2025-11-24 19:34:59.721443425 +0000 UTC m=+1093.510395796" observedRunningTime="2025-11-24 19:35:00.337504224 +0000 UTC m=+1094.126456605" watchObservedRunningTime="2025-11-24 19:35:00.340107518 +0000 UTC m=+1094.129059899" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.342269 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfeb89d5-a81d-411c-8808-ae9f506780e2","Type":"ContainerStarted","Data":"542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061"} Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.435641 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.720952784 podStartE2EDuration="29.435615598s" podCreationTimestamp="2025-11-24 19:34:31 +0000 UTC" firstStartedPulling="2025-11-24 19:34:36.00677258 +0000 UTC m=+1069.795724941" lastFinishedPulling="2025-11-24 19:34:59.721435384 +0000 UTC m=+1093.510387755" observedRunningTime="2025-11-24 19:35:00.430646357 +0000 UTC m=+1094.219598748" watchObservedRunningTime="2025-11-24 19:35:00.435615598 +0000 UTC m=+1094.224567979" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.477456 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7c6d9948dc-p5q9c"] Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.483705 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c6d9948dc-p5q9c"] Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.506399 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8sccn"] Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.517502 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-8sccn"] Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.578231 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.988830 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0791a0ac-3be3-46d3-bc7f-3b3f421dc984" path="/var/lib/kubelet/pods/0791a0ac-3be3-46d3-bc7f-3b3f421dc984/volumes" Nov 24 19:35:00 crc kubenswrapper[4812]: I1124 19:35:00.989798 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f7de91-4d87-4631-a9bd-efded8c9ea1f" path="/var/lib/kubelet/pods/13f7de91-4d87-4631-a9bd-efded8c9ea1f/volumes" Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.352417 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b2264eb6-f494-4800-832b-d1e1d02daf4e","Type":"ContainerStarted","Data":"38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be"} Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.355161 4812 generic.go:334] "Generic (PLEG): container finished" podID="8c58650e-77d4-4825-90e6-28abfa829f45" containerID="d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439" exitCode=0 Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.355251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" event={"ID":"8c58650e-77d4-4825-90e6-28abfa829f45","Type":"ContainerDied","Data":"d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439"} Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.355271 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" event={"ID":"8c58650e-77d4-4825-90e6-28abfa829f45","Type":"ContainerStarted","Data":"187b1f7cfc6dd9d2dd3199deb52ea3f128773cae512190e311695769a4591dea"} Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.359008 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c","Type":"ContainerStarted","Data":"c006dae87ca76eaf2e493bd8a0fbad15d198c2c947c2d61201bc1d4f3e9c9d97"} Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.361192 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba9f8224-4f20-4c27-b242-3385791aed68" containerID="accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1" exitCode=0 Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.361237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerDied","Data":"accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1"} Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.363610 4812 generic.go:334] "Generic (PLEG): container finished" podID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerID="20c53295fb70b389b9936a6ba822796ec16ef2f16597f3c0bcbd9aaf77b56180" exitCode=0 Nov 24 19:35:01 crc 
kubenswrapper[4812]: I1124 19:35:01.365375 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" event={"ID":"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27","Type":"ContainerDied","Data":"20c53295fb70b389b9936a6ba822796ec16ef2f16597f3c0bcbd9aaf77b56180"} Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.378311 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.748828575 podStartE2EDuration="34.378293105s" podCreationTimestamp="2025-11-24 19:34:27 +0000 UTC" firstStartedPulling="2025-11-24 19:34:29.454888394 +0000 UTC m=+1063.243840765" lastFinishedPulling="2025-11-24 19:34:54.084352924 +0000 UTC m=+1087.873305295" observedRunningTime="2025-11-24 19:35:01.37496281 +0000 UTC m=+1095.163915181" watchObservedRunningTime="2025-11-24 19:35:01.378293105 +0000 UTC m=+1095.167245466" Nov 24 19:35:01 crc kubenswrapper[4812]: I1124 19:35:01.471430 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.058150293 podStartE2EDuration="33.471412027s" podCreationTimestamp="2025-11-24 19:34:28 +0000 UTC" firstStartedPulling="2025-11-24 19:34:30.85400372 +0000 UTC m=+1064.642956091" lastFinishedPulling="2025-11-24 19:34:54.267265444 +0000 UTC m=+1088.056217825" observedRunningTime="2025-11-24 19:35:01.463124342 +0000 UTC m=+1095.252076713" watchObservedRunningTime="2025-11-24 19:35:01.471412027 +0000 UTC m=+1095.260364398" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.018195 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-kfwgx"] Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.047778 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-jjph7"] Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.049081 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.061880 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-jjph7"] Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.171039 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.171128 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-982mn\" (UniqueName: \"kubernetes.io/projected/8b3cbac7-75be-4160-84d2-1dcf750c19de-kube-api-access-982mn\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.171161 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.171187 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.171237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-config\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.272642 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-982mn\" (UniqueName: \"kubernetes.io/projected/8b3cbac7-75be-4160-84d2-1dcf750c19de-kube-api-access-982mn\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.272692 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.272715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.272763 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-config\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.272786 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.273475 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.274128 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.274826 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-config\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.275137 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.291931 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-982mn\" (UniqueName: \"kubernetes.io/projected/8b3cbac7-75be-4160-84d2-1dcf750c19de-kube-api-access-982mn\") pod \"dnsmasq-dns-5c9fdb784c-jjph7\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.370428 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.384002 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerStarted","Data":"623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680"} Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.384051 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.384065 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerStarted","Data":"20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5"} Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.384091 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.387908 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" event={"ID":"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27","Type":"ContainerStarted","Data":"cc495a5f2e1a9e6d1a83570b491fb90543abb9e65abba2135126e51fffead5db"} Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.388386 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.391081 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" event={"ID":"8c58650e-77d4-4825-90e6-28abfa829f45","Type":"ContainerStarted","Data":"1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c"} Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.391436 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.404450 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cgt9p" podStartSLOduration=21.476570892 podStartE2EDuration="26.404434229s" podCreationTimestamp="2025-11-24 19:34:36 +0000 UTC" firstStartedPulling="2025-11-24 19:34:54.793634489 +0000 UTC m=+1088.582586860" lastFinishedPulling="2025-11-24 19:34:59.721497826 +0000 UTC m=+1093.510450197" observedRunningTime="2025-11-24 19:35:02.401232718 +0000 UTC m=+1096.190185089" watchObservedRunningTime="2025-11-24 19:35:02.404434229 +0000 UTC m=+1096.193386600" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.444489 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" podStartSLOduration=4.021966318 podStartE2EDuration="4.444468925s" podCreationTimestamp="2025-11-24 19:34:58 +0000 UTC" firstStartedPulling="2025-11-24 19:35:00.30174288 +0000 UTC m=+1094.090695251" lastFinishedPulling="2025-11-24 19:35:00.724245487 +0000 UTC m=+1094.513197858" observedRunningTime="2025-11-24 19:35:02.444123745 +0000 UTC m=+1096.233076116" watchObservedRunningTime="2025-11-24 19:35:02.444468925 +0000 UTC m=+1096.233421296" Nov 24 19:35:02 crc kubenswrapper[4812]: I1124 19:35:02.445051 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" podStartSLOduration=4.030027316 podStartE2EDuration="4.445045161s" podCreationTimestamp="2025-11-24 
19:34:58 +0000 UTC" firstStartedPulling="2025-11-24 19:35:00.308164362 +0000 UTC m=+1094.097116733" lastFinishedPulling="2025-11-24 19:35:00.723182207 +0000 UTC m=+1094.512134578" observedRunningTime="2025-11-24 19:35:02.430104827 +0000 UTC m=+1096.219057218" watchObservedRunningTime="2025-11-24 19:35:02.445045161 +0000 UTC m=+1096.233997532" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.199178 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.205703 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.209045 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-j2x6n" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.213490 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.213687 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.213828 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.217551 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.289563 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.289611 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zlfw\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-kube-api-access-6zlfw\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.289646 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-cache\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.289822 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.289884 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-lock\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392059 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zlfw\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-kube-api-access-6zlfw\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392400 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-cache\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: E1124 19:35:03.392256 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392434 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: E1124 19:35:03.392443 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392457 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-lock\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: E1124 19:35:03.392497 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift podName:c127eda7-bbfe-4197-b5cf-f4f99824d0c8 nodeName:}" failed. No retries permitted until 2025-11-24 19:35:03.892478133 +0000 UTC m=+1097.681430504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift") pod "swift-storage-0" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8") : configmap "swift-ring-files" not found Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392928 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392955 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-lock\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.392927 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-cache\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.406295 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" podUID="8c58650e-77d4-4825-90e6-28abfa829f45" containerName="dnsmasq-dns" containerID="cri-o://1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c" gracePeriod=10 Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.419040 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.429433 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zlfw\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-kube-api-access-6zlfw\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: I1124 19:35:03.902510 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:03 crc kubenswrapper[4812]: E1124 19:35:03.903039 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 19:35:03 crc kubenswrapper[4812]: E1124 19:35:03.903056 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 19:35:03 crc kubenswrapper[4812]: E1124 19:35:03.903101 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift podName:c127eda7-bbfe-4197-b5cf-f4f99824d0c8 nodeName:}" failed. No retries permitted until 2025-11-24 19:35:04.90308562 +0000 UTC m=+1098.692037991 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift") pod "swift-storage-0" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8") : configmap "swift-ring-files" not found Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.195156 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.310282 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-config\") pod \"8c58650e-77d4-4825-90e6-28abfa829f45\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.310476 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-dns-svc\") pod \"8c58650e-77d4-4825-90e6-28abfa829f45\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.310555 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-nb\") pod \"8c58650e-77d4-4825-90e6-28abfa829f45\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.310591 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-sb\") pod \"8c58650e-77d4-4825-90e6-28abfa829f45\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.310617 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj49g\" (UniqueName: \"kubernetes.io/projected/8c58650e-77d4-4825-90e6-28abfa829f45-kube-api-access-hj49g\") pod \"8c58650e-77d4-4825-90e6-28abfa829f45\" (UID: \"8c58650e-77d4-4825-90e6-28abfa829f45\") " Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.317362 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c58650e-77d4-4825-90e6-28abfa829f45-kube-api-access-hj49g" (OuterVolumeSpecName: "kube-api-access-hj49g") pod "8c58650e-77d4-4825-90e6-28abfa829f45" (UID: "8c58650e-77d4-4825-90e6-28abfa829f45"). InnerVolumeSpecName "kube-api-access-hj49g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.367180 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c58650e-77d4-4825-90e6-28abfa829f45" (UID: "8c58650e-77d4-4825-90e6-28abfa829f45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.374772 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c58650e-77d4-4825-90e6-28abfa829f45" (UID: "8c58650e-77d4-4825-90e6-28abfa829f45"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.376531 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-config" (OuterVolumeSpecName: "config") pod "8c58650e-77d4-4825-90e6-28abfa829f45" (UID: "8c58650e-77d4-4825-90e6-28abfa829f45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.379451 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c58650e-77d4-4825-90e6-28abfa829f45" (UID: "8c58650e-77d4-4825-90e6-28abfa829f45"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.411949 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.412054 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.412068 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.412079 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj49g\" (UniqueName: \"kubernetes.io/projected/8c58650e-77d4-4825-90e6-28abfa829f45-kube-api-access-hj49g\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.412088 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c58650e-77d4-4825-90e6-28abfa829f45-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.417937 4812 generic.go:334] "Generic (PLEG): container finished" podID="8c58650e-77d4-4825-90e6-28abfa829f45" containerID="1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c" exitCode=0 Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.418729 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.419493 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" event={"ID":"8c58650e-77d4-4825-90e6-28abfa829f45","Type":"ContainerDied","Data":"1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c"} Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.419559 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-kfwgx" event={"ID":"8c58650e-77d4-4825-90e6-28abfa829f45","Type":"ContainerDied","Data":"187b1f7cfc6dd9d2dd3199deb52ea3f128773cae512190e311695769a4591dea"} Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.419579 4812 scope.go:117] "RemoveContainer" containerID="1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.516277 4812 scope.go:117] "RemoveContainer" containerID="d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.542828 4812 scope.go:117] "RemoveContainer" containerID="1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c" Nov 24 19:35:04 crc kubenswrapper[4812]: E1124 19:35:04.543411 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c\": container with ID starting with 1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c not found: ID does not exist" containerID="1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.543447 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c"} err="failed to get container status \"1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c\": rpc error: code = NotFound desc = could not find container \"1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c\": container with ID starting with 1e83f1fdff78871b75ea17b20fc5b1ca6fb660f78882f0c2267d405078785d4c not found: ID does not exist" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.543472 4812 scope.go:117] "RemoveContainer" containerID="d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.543695 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-kfwgx"] Nov 24 19:35:04 crc kubenswrapper[4812]: E1124 19:35:04.544027 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439\": container with ID starting with d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439 not found: ID does not exist" containerID="d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.544057 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439"} err="failed to get container status \"d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439\": rpc error: code = NotFound desc = could not find container 
\"d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439\": container with ID starting with d72b4682d9880137251bd32b7da0019a56f9216dea15ad22f3ff40a7403af439 not found: ID does not exist" Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.548775 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-kfwgx"] Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.594121 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-jjph7"] Nov 24 19:35:04 crc kubenswrapper[4812]: W1124 19:35:04.600685 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b3cbac7_75be_4160_84d2_1dcf750c19de.slice/crio-bdf4f34e772286332d0daf1d9542e3368bf9d7413c133d4cb4d8ea98d846acd8 WatchSource:0}: Error finding container bdf4f34e772286332d0daf1d9542e3368bf9d7413c133d4cb4d8ea98d846acd8: Status 404 returned error can't find the container with id bdf4f34e772286332d0daf1d9542e3368bf9d7413c133d4cb4d8ea98d846acd8 Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.919811 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:04 crc kubenswrapper[4812]: E1124 19:35:04.920118 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 19:35:04 crc kubenswrapper[4812]: E1124 19:35:04.920159 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 19:35:04 crc kubenswrapper[4812]: E1124 19:35:04.920242 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift podName:c127eda7-bbfe-4197-b5cf-f4f99824d0c8 nodeName:}" failed. No retries permitted until 2025-11-24 19:35:06.920210429 +0000 UTC m=+1100.709162840 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift") pod "swift-storage-0" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8") : configmap "swift-ring-files" not found Nov 24 19:35:04 crc kubenswrapper[4812]: I1124 19:35:04.979756 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c58650e-77d4-4825-90e6-28abfa829f45" path="/var/lib/kubelet/pods/8c58650e-77d4-4825-90e6-28abfa829f45/volumes" Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.429742 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n2bml" event={"ID":"8cfe0ed1-d668-482c-8303-2ca93bdfc057","Type":"ContainerStarted","Data":"62d03c4a9e5685c9c943a0d1a4db0073f9bc0452d073dbf66734cda8bf0c38b1"} Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.435675 4812 generic.go:334] "Generic (PLEG): container finished" podID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerID="691fe36d3615693669c1adbf5fba5712842f50a099bce01b19207fa150039f3f" exitCode=0 Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.435714 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" event={"ID":"8b3cbac7-75be-4160-84d2-1dcf750c19de","Type":"ContainerDied","Data":"691fe36d3615693669c1adbf5fba5712842f50a099bce01b19207fa150039f3f"} Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.435759 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" event={"ID":"8b3cbac7-75be-4160-84d2-1dcf750c19de","Type":"ContainerStarted","Data":"bdf4f34e772286332d0daf1d9542e3368bf9d7413c133d4cb4d8ea98d846acd8"} Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.453161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29c07c15-c183-4546-8c7b-574776382e9b","Type":"ContainerStarted","Data":"d59c90fc27bffc2d0a2421c8e4a2fca424269770fabbb3a8764ba143d1d50c6e"} Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.456045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfeb89d5-a81d-411c-8808-ae9f506780e2","Type":"ContainerStarted","Data":"b80b07ff93663ea7089effb52d3abdb13f3c43cfcfd43f87f18a84303d9f1a1c"} Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.457684 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-n2bml" podStartSLOduration=3.342854229 podStartE2EDuration="7.457667539s" podCreationTimestamp="2025-11-24 19:34:58 +0000 UTC" firstStartedPulling="2025-11-24 19:35:00.227472252 +0000 UTC m=+1094.016424643" lastFinishedPulling="2025-11-24 19:35:04.342285582 +0000 UTC m=+1098.131237953" observedRunningTime="2025-11-24 19:35:05.448918801 +0000 UTC m=+1099.237871192" watchObservedRunningTime="2025-11-24 19:35:05.457667539 +0000 UTC m=+1099.246619920" Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.564864 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.158872015 podStartE2EDuration="29.56484124s" podCreationTimestamp="2025-11-24 19:34:36 +0000 UTC" firstStartedPulling="2025-11-24 19:34:54.892958457 +0000 UTC m=+1088.681910828" lastFinishedPulling="2025-11-24 19:35:04.298927682 +0000 UTC m=+1098.087880053" observedRunningTime="2025-11-24 19:35:05.51373871 +0000 UTC m=+1099.302691101" watchObservedRunningTime="2025-11-24 19:35:05.56484124 +0000 UTC m=+1099.353793611" Nov 24 
19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.565770 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.265279114 podStartE2EDuration="28.565761186s" podCreationTimestamp="2025-11-24 19:34:37 +0000 UTC" firstStartedPulling="2025-11-24 19:34:54.985102281 +0000 UTC m=+1088.774054652" lastFinishedPulling="2025-11-24 19:35:04.285584353 +0000 UTC m=+1098.074536724" observedRunningTime="2025-11-24 19:35:05.543636688 +0000 UTC m=+1099.332589069" watchObservedRunningTime="2025-11-24 19:35:05.565761186 +0000 UTC m=+1099.354713557" Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.797655 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 19:35:05 crc kubenswrapper[4812]: I1124 19:35:05.843269 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 19:35:06 crc kubenswrapper[4812]: I1124 19:35:06.475208 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" event={"ID":"8b3cbac7-75be-4160-84d2-1dcf750c19de","Type":"ContainerStarted","Data":"5e92be46f30aebbddd1ccb1217b71c3cbf513d339a1383acb1f4bc96c35bc840"} Nov 24 19:35:06 crc kubenswrapper[4812]: I1124 19:35:06.475614 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 19:35:06 crc kubenswrapper[4812]: I1124 19:35:06.507826 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" podStartSLOduration=4.507793903 podStartE2EDuration="4.507793903s" podCreationTimestamp="2025-11-24 19:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:35:06.504540851 +0000 UTC m=+1100.293493262" watchObservedRunningTime="2025-11-24 19:35:06.507793903 +0000 UTC m=+1100.296746324" Nov 24 19:35:06 crc kubenswrapper[4812]: I1124 19:35:06.552563 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 19:35:06 crc kubenswrapper[4812]: I1124 19:35:06.957956 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:06 crc kubenswrapper[4812]: E1124 19:35:06.958475 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 19:35:06 crc kubenswrapper[4812]: E1124 19:35:06.958489 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 19:35:06 crc kubenswrapper[4812]: E1124 19:35:06.958530 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift podName:c127eda7-bbfe-4197-b5cf-f4f99824d0c8 nodeName:}" failed. No retries permitted until 2025-11-24 19:35:10.958515942 +0000 UTC m=+1104.747468313 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift") pod "swift-storage-0" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8") : configmap "swift-ring-files" not found Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.152499 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ngkf6"] Nov 24 19:35:07 crc kubenswrapper[4812]: E1124 19:35:07.152816 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c58650e-77d4-4825-90e6-28abfa829f45" containerName="dnsmasq-dns" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.152830 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c58650e-77d4-4825-90e6-28abfa829f45" containerName="dnsmasq-dns" Nov 24 19:35:07 crc kubenswrapper[4812]: E1124 19:35:07.152844 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c58650e-77d4-4825-90e6-28abfa829f45" containerName="init" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.152851 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c58650e-77d4-4825-90e6-28abfa829f45" containerName="init" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.153043 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c58650e-77d4-4825-90e6-28abfa829f45" containerName="dnsmasq-dns" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.153591 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.155240 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.156208 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.156365 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.192974 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ngkf6"] Nov 24 19:35:07 crc kubenswrapper[4812]: E1124 19:35:07.193669 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-wpl8z ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-ngkf6" podUID="f3602ce7-8b47-4d0e-9080-acd78c491dc9" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.200697 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-99q2n"] Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.202150 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.214544 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-99q2n"] Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.222247 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ngkf6"] Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.262341 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-dispersionconf\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.262380 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpl8z\" (UniqueName: \"kubernetes.io/projected/f3602ce7-8b47-4d0e-9080-acd78c491dc9-kube-api-access-wpl8z\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.262401 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-combined-ca-bundle\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.262487 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f3602ce7-8b47-4d0e-9080-acd78c491dc9-etc-swift\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.262551 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-ring-data-devices\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.262615 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-swiftconf\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.262662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-scripts\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364110 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-dispersionconf\") pod \"swift-ring-rebalance-99q2n\" (UID: 
\"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f3602ce7-8b47-4d0e-9080-acd78c491dc9-etc-swift\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-ring-data-devices\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364269 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tc6\" (UniqueName: \"kubernetes.io/projected/b3986d6e-e683-4327-a5cd-db98e23ca287-kube-api-access-26tc6\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-ring-data-devices\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364327 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-combined-ca-bundle\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364383 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-scripts\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364422 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-swiftconf\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364494 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3986d6e-e683-4327-a5cd-db98e23ca287-etc-swift\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364526 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-swiftconf\") pod \"swift-ring-rebalance-ngkf6\" (UID: 
\"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364590 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-scripts\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-dispersionconf\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364660 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpl8z\" (UniqueName: \"kubernetes.io/projected/f3602ce7-8b47-4d0e-9080-acd78c491dc9-kube-api-access-wpl8z\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364682 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-combined-ca-bundle\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.364760 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f3602ce7-8b47-4d0e-9080-acd78c491dc9-etc-swift\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.365638 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-ring-data-devices\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.366167 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-scripts\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.371469 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.372528 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-dispersionconf\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.373055 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-swiftconf\") pod 
\"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.373214 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-combined-ca-bundle\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.391463 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpl8z\" (UniqueName: \"kubernetes.io/projected/f3602ce7-8b47-4d0e-9080-acd78c491dc9-kube-api-access-wpl8z\") pod \"swift-ring-rebalance-ngkf6\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.466491 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-dispersionconf\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.466585 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-ring-data-devices\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.466614 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26tc6\" (UniqueName: \"kubernetes.io/projected/b3986d6e-e683-4327-a5cd-db98e23ca287-kube-api-access-26tc6\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.466645 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-combined-ca-bundle\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.466669 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-scripts\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.466694 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-swiftconf\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.466771 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3986d6e-e683-4327-a5cd-db98e23ca287-etc-swift\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " 
pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.467318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3986d6e-e683-4327-a5cd-db98e23ca287-etc-swift\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.468327 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-ring-data-devices\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.469188 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-scripts\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.470944 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-dispersionconf\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.471232 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-combined-ca-bundle\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.471673 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-swiftconf\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.483849 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.506733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tc6\" (UniqueName: \"kubernetes.io/projected/b3986d6e-e683-4327-a5cd-db98e23ca287-kube-api-access-26tc6\") pod \"swift-ring-rebalance-99q2n\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.511633 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.511687 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.531886 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.551531 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.580119 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.668880 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-dispersionconf\") pod \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.668921 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-swiftconf\") pod \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.668985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpl8z\" (UniqueName: \"kubernetes.io/projected/f3602ce7-8b47-4d0e-9080-acd78c491dc9-kube-api-access-wpl8z\") pod \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.669036 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-combined-ca-bundle\") pod \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.669070 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f3602ce7-8b47-4d0e-9080-acd78c491dc9-etc-swift\") pod \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.669134 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-ring-data-devices\") pod \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.669232 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-scripts\") pod \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\" (UID: \"f3602ce7-8b47-4d0e-9080-acd78c491dc9\") " Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.672110 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-scripts" (OuterVolumeSpecName: "scripts") pod "f3602ce7-8b47-4d0e-9080-acd78c491dc9" (UID: "f3602ce7-8b47-4d0e-9080-acd78c491dc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.672831 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "f3602ce7-8b47-4d0e-9080-acd78c491dc9" (UID: "f3602ce7-8b47-4d0e-9080-acd78c491dc9"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.674304 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3602ce7-8b47-4d0e-9080-acd78c491dc9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f3602ce7-8b47-4d0e-9080-acd78c491dc9" (UID: "f3602ce7-8b47-4d0e-9080-acd78c491dc9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.681031 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3602ce7-8b47-4d0e-9080-acd78c491dc9-kube-api-access-wpl8z" (OuterVolumeSpecName: "kube-api-access-wpl8z") pod "f3602ce7-8b47-4d0e-9080-acd78c491dc9" (UID: "f3602ce7-8b47-4d0e-9080-acd78c491dc9"). InnerVolumeSpecName "kube-api-access-wpl8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.681413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "f3602ce7-8b47-4d0e-9080-acd78c491dc9" (UID: "f3602ce7-8b47-4d0e-9080-acd78c491dc9"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.681551 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "f3602ce7-8b47-4d0e-9080-acd78c491dc9" (UID: "f3602ce7-8b47-4d0e-9080-acd78c491dc9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.682448 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3602ce7-8b47-4d0e-9080-acd78c491dc9" (UID: "f3602ce7-8b47-4d0e-9080-acd78c491dc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.771435 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpl8z\" (UniqueName: \"kubernetes.io/projected/f3602ce7-8b47-4d0e-9080-acd78c491dc9-kube-api-access-wpl8z\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.771475 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.771488 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/f3602ce7-8b47-4d0e-9080-acd78c491dc9-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.771501 4812 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.771513 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f3602ce7-8b47-4d0e-9080-acd78c491dc9-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.771522 4812 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:07 crc kubenswrapper[4812]: I1124 19:35:07.771532 4812 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/f3602ce7-8b47-4d0e-9080-acd78c491dc9-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.044684 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-99q2n"] Nov 24 19:35:08 crc kubenswrapper[4812]: W1124 19:35:08.052300 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3986d6e_e683_4327_a5cd_db98e23ca287.slice/crio-15a627f2459f718b8ef690c4286681bf625760b158b30d2e1de4f89014e7ab22 WatchSource:0}: Error finding container 15a627f2459f718b8ef690c4286681bf625760b158b30d2e1de4f89014e7ab22: Status 404 returned error can't find the container with id 15a627f2459f718b8ef690c4286681bf625760b158b30d2e1de4f89014e7ab22 Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.504280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-99q2n" event={"ID":"b3986d6e-e683-4327-a5cd-db98e23ca287","Type":"ContainerStarted","Data":"15a627f2459f718b8ef690c4286681bf625760b158b30d2e1de4f89014e7ab22"} Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.504959 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ngkf6" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.563655 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ngkf6"] Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.575052 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-ngkf6"] Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.605627 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.807944 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.810369 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.815557 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.816514 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7p6rg" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.816728 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.816888 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.829017 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.851670 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.851707 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.895090 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.895146 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.895197 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-scripts\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.895216 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz55c\" (UniqueName: \"kubernetes.io/projected/0648d4b9-0096-4092-b2b9-70e23f9c863c-kube-api-access-nz55c\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " 
pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.895308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.895423 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-config\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.895460 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.975184 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3602ce7-8b47-4d0e-9080-acd78c491dc9" path="/var/lib/kubelet/pods/f3602ce7-8b47-4d0e-9080-acd78c491dc9/volumes" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.981468 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.996632 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.996746 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-config\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.996788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.996822 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.996848 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.996903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-scripts\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.996923 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz55c\" (UniqueName: \"kubernetes.io/projected/0648d4b9-0096-4092-b2b9-70e23f9c863c-kube-api-access-nz55c\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:08 crc kubenswrapper[4812]: I1124 19:35:08.997553 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.004644 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-scripts\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.004758 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-config\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.005618 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.013111 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.013809 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.021083 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz55c\" (UniqueName: \"kubernetes.io/projected/0648d4b9-0096-4092-b2b9-70e23f9c863c-kube-api-access-nz55c\") pod \"ovn-northd-0\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.029425 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.154414 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.638979 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 19:35:09 crc kubenswrapper[4812]: I1124 19:35:09.644650 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.110714 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.111551 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.169809 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c995-account-create-vdjsf"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.170951 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.189455 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.194322 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c995-account-create-vdjsf"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.218508 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-srrps"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.220025 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.222329 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjxj\" (UniqueName: \"kubernetes.io/projected/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-kube-api-access-npjxj\") pod \"keystone-c995-account-create-vdjsf\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.223142 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-operator-scripts\") pod \"keystone-c995-account-create-vdjsf\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.237132 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-srrps"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.243815 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.324503 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-operator-scripts\") pod \"keystone-db-create-srrps\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.324876 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-operator-scripts\") pod \"keystone-c995-account-create-vdjsf\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.324916 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6z8f\" (UniqueName: \"kubernetes.io/projected/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-kube-api-access-f6z8f\") pod \"keystone-db-create-srrps\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.324988 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjxj\" (UniqueName: \"kubernetes.io/projected/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-kube-api-access-npjxj\") pod \"keystone-c995-account-create-vdjsf\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.326286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-operator-scripts\") pod \"keystone-c995-account-create-vdjsf\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.357834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjxj\" (UniqueName: \"kubernetes.io/projected/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-kube-api-access-npjxj\") pod \"keystone-c995-account-create-vdjsf\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.418727 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-7776w"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.421459 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.426948 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6z8f\" (UniqueName: \"kubernetes.io/projected/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-kube-api-access-f6z8f\") pod \"keystone-db-create-srrps\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.427074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-operator-scripts\") pod \"keystone-db-create-srrps\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.427755 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-operator-scripts\") pod \"keystone-db-create-srrps\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.443514 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7776w"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.460485 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6z8f\" (UniqueName: \"kubernetes.io/projected/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-kube-api-access-f6z8f\") pod \"keystone-db-create-srrps\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.490322 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.506465 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0bb7-account-create-jvn5p"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.508238 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.510187 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.522436 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0bb7-account-create-jvn5p"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.528593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmpmp\" (UniqueName: \"kubernetes.io/projected/6986c0d4-ba42-44f3-ba4f-51872e0345b6-kube-api-access-qmpmp\") pod \"placement-db-create-7776w\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.529103 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6986c0d4-ba42-44f3-ba4f-51872e0345b6-operator-scripts\") pod \"placement-db-create-7776w\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.535535 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0648d4b9-0096-4092-b2b9-70e23f9c863c","Type":"ContainerStarted","Data":"61f062143007d0e2ed51cf609156fdd92f7d94b279cda6b166dc5c6918fc570c"} Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.540804 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-srrps" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.608438 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.631053 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-operator-scripts\") pod \"placement-0bb7-account-create-jvn5p\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.631118 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5zk\" (UniqueName: \"kubernetes.io/projected/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-kube-api-access-bb5zk\") pod \"placement-0bb7-account-create-jvn5p\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.631148 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6986c0d4-ba42-44f3-ba4f-51872e0345b6-operator-scripts\") pod \"placement-db-create-7776w\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.631219 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmpmp\" (UniqueName: \"kubernetes.io/projected/6986c0d4-ba42-44f3-ba4f-51872e0345b6-kube-api-access-qmpmp\") pod \"placement-db-create-7776w\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 
crc kubenswrapper[4812]: I1124 19:35:10.632238 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6986c0d4-ba42-44f3-ba4f-51872e0345b6-operator-scripts\") pod \"placement-db-create-7776w\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.646765 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmpmp\" (UniqueName: \"kubernetes.io/projected/6986c0d4-ba42-44f3-ba4f-51872e0345b6-kube-api-access-qmpmp\") pod \"placement-db-create-7776w\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.712843 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-5n45p"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.713934 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5n45p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.719142 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5n45p"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.769021 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7776w" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.769511 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-operator-scripts\") pod \"placement-0bb7-account-create-jvn5p\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.769552 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5zk\" (UniqueName: \"kubernetes.io/projected/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-kube-api-access-bb5zk\") pod \"placement-0bb7-account-create-jvn5p\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.769645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c22383eb-b38e-415a-b0e4-bde087a20c04-operator-scripts\") pod \"glance-db-create-5n45p\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " pod="openstack/glance-db-create-5n45p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.769728 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnmcg\" (UniqueName: \"kubernetes.io/projected/c22383eb-b38e-415a-b0e4-bde087a20c04-kube-api-access-fnmcg\") pod \"glance-db-create-5n45p\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " pod="openstack/glance-db-create-5n45p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.770480 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-operator-scripts\") pod \"placement-0bb7-account-create-jvn5p\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.801011 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bb5zk\" (UniqueName: \"kubernetes.io/projected/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-kube-api-access-bb5zk\") pod \"placement-0bb7-account-create-jvn5p\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.833601 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.871247 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnmcg\" (UniqueName: \"kubernetes.io/projected/c22383eb-b38e-415a-b0e4-bde087a20c04-kube-api-access-fnmcg\") pod \"glance-db-create-5n45p\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " pod="openstack/glance-db-create-5n45p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.871455 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c22383eb-b38e-415a-b0e4-bde087a20c04-operator-scripts\") pod \"glance-db-create-5n45p\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " pod="openstack/glance-db-create-5n45p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.872327 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c22383eb-b38e-415a-b0e4-bde087a20c04-operator-scripts\") pod \"glance-db-create-5n45p\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " pod="openstack/glance-db-create-5n45p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.909100 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnmcg\" (UniqueName: \"kubernetes.io/projected/c22383eb-b38e-415a-b0e4-bde087a20c04-kube-api-access-fnmcg\") pod \"glance-db-create-5n45p\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " pod="openstack/glance-db-create-5n45p" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.910451 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7fd2-account-create-psclf"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.911494 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.914674 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.916779 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fd2-account-create-psclf"] Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.973489 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mclpv\" (UniqueName: \"kubernetes.io/projected/b2b021e1-1d1a-422d-994a-3af278f1145b-kube-api-access-mclpv\") pod \"glance-7fd2-account-create-psclf\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.973547 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2b021e1-1d1a-422d-994a-3af278f1145b-operator-scripts\") pod \"glance-7fd2-account-create-psclf\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:10 crc kubenswrapper[4812]: I1124 19:35:10.973626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:10 crc kubenswrapper[4812]: E1124 19:35:10.973754 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 19:35:10 crc kubenswrapper[4812]: E1124 19:35:10.973766 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 19:35:10 crc kubenswrapper[4812]: E1124 19:35:10.973805 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift podName:c127eda7-bbfe-4197-b5cf-f4f99824d0c8 nodeName:}" failed. No retries permitted until 2025-11-24 19:35:18.973790037 +0000 UTC m=+1112.762742408 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift") pod "swift-storage-0" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8") : configmap "swift-ring-files" not found Nov 24 19:35:11 crc kubenswrapper[4812]: I1124 19:35:11.075939 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mclpv\" (UniqueName: \"kubernetes.io/projected/b2b021e1-1d1a-422d-994a-3af278f1145b-kube-api-access-mclpv\") pod \"glance-7fd2-account-create-psclf\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:11 crc kubenswrapper[4812]: I1124 19:35:11.076014 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2b021e1-1d1a-422d-994a-3af278f1145b-operator-scripts\") pod \"glance-7fd2-account-create-psclf\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:11 crc kubenswrapper[4812]: I1124 19:35:11.077125 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2b021e1-1d1a-422d-994a-3af278f1145b-operator-scripts\") pod \"glance-7fd2-account-create-psclf\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:11 crc kubenswrapper[4812]: I1124 19:35:11.109057 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mclpv\" (UniqueName: \"kubernetes.io/projected/b2b021e1-1d1a-422d-994a-3af278f1145b-kube-api-access-mclpv\") pod \"glance-7fd2-account-create-psclf\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:11 crc kubenswrapper[4812]: I1124 19:35:11.118911 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5n45p" Nov 24 19:35:11 crc kubenswrapper[4812]: I1124 19:35:11.249758 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:12 crc kubenswrapper[4812]: I1124 19:35:12.130691 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 19:35:12 crc kubenswrapper[4812]: I1124 19:35:12.373377 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:35:12 crc kubenswrapper[4812]: I1124 19:35:12.427382 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-g5xtt"] Nov 24 19:35:12 crc kubenswrapper[4812]: I1124 19:35:12.427836 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerName="dnsmasq-dns" containerID="cri-o://cc495a5f2e1a9e6d1a83570b491fb90543abb9e65abba2135126e51fffead5db" gracePeriod=10 Nov 24 19:35:13 crc kubenswrapper[4812]: I1124 19:35:13.564285 4812 generic.go:334] "Generic (PLEG): container finished" podID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerID="cc495a5f2e1a9e6d1a83570b491fb90543abb9e65abba2135126e51fffead5db" exitCode=0 Nov 24 19:35:13 crc kubenswrapper[4812]: I1124 19:35:13.564374 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" event={"ID":"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27","Type":"ContainerDied","Data":"cc495a5f2e1a9e6d1a83570b491fb90543abb9e65abba2135126e51fffead5db"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.134290 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.246085 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-config\") pod \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.246288 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-ovsdbserver-sb\") pod \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.246465 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-dns-svc\") pod \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.246683 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52skr\" (UniqueName: \"kubernetes.io/projected/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-kube-api-access-52skr\") pod \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\" (UID: \"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27\") " Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.250896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-kube-api-access-52skr" (OuterVolumeSpecName: "kube-api-access-52skr") pod "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" (UID: "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27"). InnerVolumeSpecName "kube-api-access-52skr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.294934 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" (UID: "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.298729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-config" (OuterVolumeSpecName: "config") pod "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" (UID: "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.300556 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" (UID: "00d308d2-17b6-46fe-94ac-0cc7e0fd1e27"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.349346 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52skr\" (UniqueName: \"kubernetes.io/projected/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-kube-api-access-52skr\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.349379 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.349391 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.349404 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.380655 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0bb7-account-create-jvn5p"] Nov 24 19:35:14 crc kubenswrapper[4812]: W1124 19:35:14.390943 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed3a39d_494f_40f0_a336_1ccbc4d73f93.slice/crio-fe893de49887504cc17021531901bc53358e2919ee53a61806f01531b013fc32 WatchSource:0}: Error finding container fe893de49887504cc17021531901bc53358e2919ee53a61806f01531b013fc32: Status 404 returned error can't find the container with id fe893de49887504cc17021531901bc53358e2919ee53a61806f01531b013fc32 Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.488013 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-7776w"] Nov 24 19:35:14 crc kubenswrapper[4812]: W1124 19:35:14.498190 4812 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6986c0d4_ba42_44f3_ba4f_51872e0345b6.slice/crio-b8aaf282b2a7d1570b0fbb4b0208ff30f6f6e8c4efe1b4b82b89e3347c5fb4e7 WatchSource:0}: Error finding container b8aaf282b2a7d1570b0fbb4b0208ff30f6f6e8c4efe1b4b82b89e3347c5fb4e7: Status 404 returned error can't find the container with id b8aaf282b2a7d1570b0fbb4b0208ff30f6f6e8c4efe1b4b82b89e3347c5fb4e7 Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.512101 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fd2-account-create-psclf"] Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.613499 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0bb7-account-create-jvn5p" event={"ID":"0ed3a39d-494f-40f0-a336-1ccbc4d73f93","Type":"ContainerStarted","Data":"c8fca5425aa6a7ad2399b179910d88ae1434ba33809ef43ae78dfaf618f07228"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.613541 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0bb7-account-create-jvn5p" event={"ID":"0ed3a39d-494f-40f0-a336-1ccbc4d73f93","Type":"ContainerStarted","Data":"fe893de49887504cc17021531901bc53358e2919ee53a61806f01531b013fc32"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.617202 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c995-account-create-vdjsf"] Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.620089 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd2-account-create-psclf" event={"ID":"b2b021e1-1d1a-422d-994a-3af278f1145b","Type":"ContainerStarted","Data":"01238ec8c46bc0a29cf6f5605d1df7f0c977656ca539746d6a822f3683c1896a"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.622062 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-99q2n" event={"ID":"b3986d6e-e683-4327-a5cd-db98e23ca287","Type":"ContainerStarted","Data":"fb57fda78afe686a7cc9cc2b74189b0701e0efae994b7b22fa603354146ddb49"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.625596 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" event={"ID":"00d308d2-17b6-46fe-94ac-0cc7e0fd1e27","Type":"ContainerDied","Data":"97c9fc3b8e9619c32517080914a225b8b4272366939e5d6d5c47e54cd3955a34"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.625635 4812 scope.go:117] "RemoveContainer" containerID="cc495a5f2e1a9e6d1a83570b491fb90543abb9e65abba2135126e51fffead5db" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.625788 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.629153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0648d4b9-0096-4092-b2b9-70e23f9c863c","Type":"ContainerStarted","Data":"14a1254945b1fd6a95f6b90d7cf54f5c07482fc6aea59fae62c4d9b5369203bf"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.629178 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0648d4b9-0096-4092-b2b9-70e23f9c863c","Type":"ContainerStarted","Data":"1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92"} Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.629248 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.630133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7776w" event={"ID":"6986c0d4-ba42-44f3-ba4f-51872e0345b6","Type":"ContainerStarted","Data":"b8aaf282b2a7d1570b0fbb4b0208ff30f6f6e8c4efe1b4b82b89e3347c5fb4e7"} Nov 24 19:35:14 crc kubenswrapper[4812]: W1124 19:35:14.639467 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5a20600_c4a1_4a64_9c8b_c8cf8680abac.slice/crio-82f255b3d15023a0bd4b0d7805d1f47c60d23e5434950cb1d0e4be16b9f7b48b WatchSource:0}: Error finding container 82f255b3d15023a0bd4b0d7805d1f47c60d23e5434950cb1d0e4be16b9f7b48b: Status 404 returned error can't find the container with id 82f255b3d15023a0bd4b0d7805d1f47c60d23e5434950cb1d0e4be16b9f7b48b Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.639629 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-0bb7-account-create-jvn5p" podStartSLOduration=4.639614467 podStartE2EDuration="4.639614467s" podCreationTimestamp="2025-11-24 19:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:35:14.634117411 +0000 UTC m=+1108.423069782" watchObservedRunningTime="2025-11-24 19:35:14.639614467 +0000 UTC m=+1108.428566848" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.659316 4812 scope.go:117] "RemoveContainer" containerID="20c53295fb70b389b9936a6ba822796ec16ef2f16597f3c0bcbd9aaf77b56180" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.667422 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-99q2n" podStartSLOduration=1.812088964 podStartE2EDuration="7.667403306s" podCreationTimestamp="2025-11-24 19:35:07 +0000 UTC" firstStartedPulling="2025-11-24 19:35:08.056450914 +0000 UTC m=+1101.845403315" lastFinishedPulling="2025-11-24 19:35:13.911765286 +0000 UTC m=+1107.700717657" observedRunningTime="2025-11-24 19:35:14.649968751 +0000 UTC m=+1108.438921112" watchObservedRunningTime="2025-11-24 19:35:14.667403306 +0000 UTC m=+1108.456355677" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.667791 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-srrps"] Nov 24 19:35:14 crc kubenswrapper[4812]: W1124 19:35:14.676025 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a6064d5_ead6_4ac9_8c53_6c72585d76ff.slice/crio-e1afc0af2a7b893767184c98196baa2d15387bef1b56b1763712187365d06c5f WatchSource:0}: Error 
finding container e1afc0af2a7b893767184c98196baa2d15387bef1b56b1763712187365d06c5f: Status 404 returned error can't find the container with id e1afc0af2a7b893767184c98196baa2d15387bef1b56b1763712187365d06c5f Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.686709 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.461218224 podStartE2EDuration="6.686690003s" podCreationTimestamp="2025-11-24 19:35:08 +0000 UTC" firstStartedPulling="2025-11-24 19:35:09.670486868 +0000 UTC m=+1103.459439239" lastFinishedPulling="2025-11-24 19:35:13.895958647 +0000 UTC m=+1107.684911018" observedRunningTime="2025-11-24 19:35:14.667256411 +0000 UTC m=+1108.456208802" watchObservedRunningTime="2025-11-24 19:35:14.686690003 +0000 UTC m=+1108.475642374" Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.692494 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-g5xtt"] Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.697969 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65c9b8d4f7-g5xtt"] Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.761973 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-5n45p"] Nov 24 19:35:14 crc kubenswrapper[4812]: I1124 19:35:14.975095 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" path="/var/lib/kubelet/pods/00d308d2-17b6-46fe-94ac-0cc7e0fd1e27/volumes" Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.647057 4812 generic.go:334] "Generic (PLEG): container finished" podID="6986c0d4-ba42-44f3-ba4f-51872e0345b6" containerID="27dd92bf0fa31dae5743c0d091290f515f61b6aec4150df2a4d104523c02a158" exitCode=0 Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.647174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7776w" event={"ID":"6986c0d4-ba42-44f3-ba4f-51872e0345b6","Type":"ContainerDied","Data":"27dd92bf0fa31dae5743c0d091290f515f61b6aec4150df2a4d104523c02a158"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.652449 4812 generic.go:334] "Generic (PLEG): container finished" podID="0ed3a39d-494f-40f0-a336-1ccbc4d73f93" containerID="c8fca5425aa6a7ad2399b179910d88ae1434ba33809ef43ae78dfaf618f07228" exitCode=0 Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.652531 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0bb7-account-create-jvn5p" event={"ID":"0ed3a39d-494f-40f0-a336-1ccbc4d73f93","Type":"ContainerDied","Data":"c8fca5425aa6a7ad2399b179910d88ae1434ba33809ef43ae78dfaf618f07228"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.653856 4812 generic.go:334] "Generic (PLEG): container finished" podID="6a6064d5-ead6-4ac9-8c53-6c72585d76ff" containerID="3f636d722fa34d8944e84798a852e165a8362197815ef3149415654b567bad59" exitCode=0 Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.653918 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-srrps" event={"ID":"6a6064d5-ead6-4ac9-8c53-6c72585d76ff","Type":"ContainerDied","Data":"3f636d722fa34d8944e84798a852e165a8362197815ef3149415654b567bad59"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.653946 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-srrps" 
event={"ID":"6a6064d5-ead6-4ac9-8c53-6c72585d76ff","Type":"ContainerStarted","Data":"e1afc0af2a7b893767184c98196baa2d15387bef1b56b1763712187365d06c5f"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.655127 4812 generic.go:334] "Generic (PLEG): container finished" podID="c22383eb-b38e-415a-b0e4-bde087a20c04" containerID="5b24363a93467b1f7aa9e690a4e55479fe80f7b68b45f314d458b78b448a1226" exitCode=0 Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.655182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5n45p" event={"ID":"c22383eb-b38e-415a-b0e4-bde087a20c04","Type":"ContainerDied","Data":"5b24363a93467b1f7aa9e690a4e55479fe80f7b68b45f314d458b78b448a1226"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.655207 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5n45p" event={"ID":"c22383eb-b38e-415a-b0e4-bde087a20c04","Type":"ContainerStarted","Data":"62904cf036cfae53e86f3a77ec60fe6bc2361833ce3865fcd3f0cfd62b40e759"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.667726 4812 generic.go:334] "Generic (PLEG): container finished" podID="e5a20600-c4a1-4a64-9c8b-c8cf8680abac" containerID="26e7b8154f6ad6d1baf3c0c70a991b1f542ee6aff6d15fb541f84daeb638b80a" exitCode=0 Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.667856 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c995-account-create-vdjsf" event={"ID":"e5a20600-c4a1-4a64-9c8b-c8cf8680abac","Type":"ContainerDied","Data":"26e7b8154f6ad6d1baf3c0c70a991b1f542ee6aff6d15fb541f84daeb638b80a"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.667895 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c995-account-create-vdjsf" event={"ID":"e5a20600-c4a1-4a64-9c8b-c8cf8680abac","Type":"ContainerStarted","Data":"82f255b3d15023a0bd4b0d7805d1f47c60d23e5434950cb1d0e4be16b9f7b48b"} Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.670244 4812 generic.go:334] "Generic (PLEG): container finished" podID="b2b021e1-1d1a-422d-994a-3af278f1145b" containerID="3f6b939af1e2c730ee7d67e6cd8a16b9550676b93f648f93b4359aa7139b962a" exitCode=0 Nov 24 19:35:15 crc kubenswrapper[4812]: I1124 19:35:15.671477 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd2-account-create-psclf" event={"ID":"b2b021e1-1d1a-422d-994a-3af278f1145b","Type":"ContainerDied","Data":"3f6b939af1e2c730ee7d67e6cd8a16b9550676b93f648f93b4359aa7139b962a"} Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.103176 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.208794 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb5zk\" (UniqueName: \"kubernetes.io/projected/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-kube-api-access-bb5zk\") pod \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.208914 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-operator-scripts\") pod \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\" (UID: \"0ed3a39d-494f-40f0-a336-1ccbc4d73f93\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.210082 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ed3a39d-494f-40f0-a336-1ccbc4d73f93" (UID: "0ed3a39d-494f-40f0-a336-1ccbc4d73f93"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.216454 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-kube-api-access-bb5zk" (OuterVolumeSpecName: "kube-api-access-bb5zk") pod "0ed3a39d-494f-40f0-a336-1ccbc4d73f93" (UID: "0ed3a39d-494f-40f0-a336-1ccbc4d73f93"). InnerVolumeSpecName "kube-api-access-bb5zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.311143 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb5zk\" (UniqueName: \"kubernetes.io/projected/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-kube-api-access-bb5zk\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.311450 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ed3a39d-494f-40f0-a336-1ccbc4d73f93-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.380081 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.389438 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5n45p" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.390998 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.406973 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-srrps" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.425063 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-7776w" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.514780 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-operator-scripts\") pod \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.514813 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c22383eb-b38e-415a-b0e4-bde087a20c04-operator-scripts\") pod \"c22383eb-b38e-415a-b0e4-bde087a20c04\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.514863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6z8f\" (UniqueName: \"kubernetes.io/projected/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-kube-api-access-f6z8f\") pod \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.514921 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnmcg\" (UniqueName: \"kubernetes.io/projected/c22383eb-b38e-415a-b0e4-bde087a20c04-kube-api-access-fnmcg\") pod \"c22383eb-b38e-415a-b0e4-bde087a20c04\" (UID: \"c22383eb-b38e-415a-b0e4-bde087a20c04\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515213 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjxj\" (UniqueName: \"kubernetes.io/projected/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-kube-api-access-npjxj\") pod \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\" (UID: \"e5a20600-c4a1-4a64-9c8b-c8cf8680abac\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515255 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2b021e1-1d1a-422d-994a-3af278f1145b-operator-scripts\") pod \"b2b021e1-1d1a-422d-994a-3af278f1145b\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-operator-scripts\") pod \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\" (UID: \"6a6064d5-ead6-4ac9-8c53-6c72585d76ff\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515315 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmpmp\" (UniqueName: \"kubernetes.io/projected/6986c0d4-ba42-44f3-ba4f-51872e0345b6-kube-api-access-qmpmp\") pod \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515341 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6986c0d4-ba42-44f3-ba4f-51872e0345b6-operator-scripts\") pod \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\" (UID: \"6986c0d4-ba42-44f3-ba4f-51872e0345b6\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515412 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mclpv\" (UniqueName: 
\"kubernetes.io/projected/b2b021e1-1d1a-422d-994a-3af278f1145b-kube-api-access-mclpv\") pod \"b2b021e1-1d1a-422d-994a-3af278f1145b\" (UID: \"b2b021e1-1d1a-422d-994a-3af278f1145b\") " Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5a20600-c4a1-4a64-9c8b-c8cf8680abac" (UID: "e5a20600-c4a1-4a64-9c8b-c8cf8680abac"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.515918 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.516498 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6986c0d4-ba42-44f3-ba4f-51872e0345b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6986c0d4-ba42-44f3-ba4f-51872e0345b6" (UID: "6986c0d4-ba42-44f3-ba4f-51872e0345b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.516586 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c22383eb-b38e-415a-b0e4-bde087a20c04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c22383eb-b38e-415a-b0e4-bde087a20c04" (UID: "c22383eb-b38e-415a-b0e4-bde087a20c04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.516667 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2b021e1-1d1a-422d-994a-3af278f1145b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2b021e1-1d1a-422d-994a-3af278f1145b" (UID: "b2b021e1-1d1a-422d-994a-3af278f1145b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.516719 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6a6064d5-ead6-4ac9-8c53-6c72585d76ff" (UID: "6a6064d5-ead6-4ac9-8c53-6c72585d76ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.519750 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6986c0d4-ba42-44f3-ba4f-51872e0345b6-kube-api-access-qmpmp" (OuterVolumeSpecName: "kube-api-access-qmpmp") pod "6986c0d4-ba42-44f3-ba4f-51872e0345b6" (UID: "6986c0d4-ba42-44f3-ba4f-51872e0345b6"). InnerVolumeSpecName "kube-api-access-qmpmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.520311 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-kube-api-access-npjxj" (OuterVolumeSpecName: "kube-api-access-npjxj") pod "e5a20600-c4a1-4a64-9c8b-c8cf8680abac" (UID: "e5a20600-c4a1-4a64-9c8b-c8cf8680abac"). 
InnerVolumeSpecName "kube-api-access-npjxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.520527 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22383eb-b38e-415a-b0e4-bde087a20c04-kube-api-access-fnmcg" (OuterVolumeSpecName: "kube-api-access-fnmcg") pod "c22383eb-b38e-415a-b0e4-bde087a20c04" (UID: "c22383eb-b38e-415a-b0e4-bde087a20c04"). InnerVolumeSpecName "kube-api-access-fnmcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.521011 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-kube-api-access-f6z8f" (OuterVolumeSpecName: "kube-api-access-f6z8f") pod "6a6064d5-ead6-4ac9-8c53-6c72585d76ff" (UID: "6a6064d5-ead6-4ac9-8c53-6c72585d76ff"). InnerVolumeSpecName "kube-api-access-f6z8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.521627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2b021e1-1d1a-422d-994a-3af278f1145b-kube-api-access-mclpv" (OuterVolumeSpecName: "kube-api-access-mclpv") pod "b2b021e1-1d1a-422d-994a-3af278f1145b" (UID: "b2b021e1-1d1a-422d-994a-3af278f1145b"). InnerVolumeSpecName "kube-api-access-mclpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618052 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npjxj\" (UniqueName: \"kubernetes.io/projected/e5a20600-c4a1-4a64-9c8b-c8cf8680abac-kube-api-access-npjxj\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618309 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2b021e1-1d1a-422d-994a-3af278f1145b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618401 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618418 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmpmp\" (UniqueName: \"kubernetes.io/projected/6986c0d4-ba42-44f3-ba4f-51872e0345b6-kube-api-access-qmpmp\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618456 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6986c0d4-ba42-44f3-ba4f-51872e0345b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618470 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mclpv\" (UniqueName: \"kubernetes.io/projected/b2b021e1-1d1a-422d-994a-3af278f1145b-kube-api-access-mclpv\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618483 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c22383eb-b38e-415a-b0e4-bde087a20c04-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618494 4812 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-f6z8f\" (UniqueName: \"kubernetes.io/projected/6a6064d5-ead6-4ac9-8c53-6c72585d76ff-kube-api-access-f6z8f\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.618505 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnmcg\" (UniqueName: \"kubernetes.io/projected/c22383eb-b38e-415a-b0e4-bde087a20c04-kube-api-access-fnmcg\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.687718 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c995-account-create-vdjsf" event={"ID":"e5a20600-c4a1-4a64-9c8b-c8cf8680abac","Type":"ContainerDied","Data":"82f255b3d15023a0bd4b0d7805d1f47c60d23e5434950cb1d0e4be16b9f7b48b"} Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.687762 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82f255b3d15023a0bd4b0d7805d1f47c60d23e5434950cb1d0e4be16b9f7b48b" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.687777 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c995-account-create-vdjsf" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.689691 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd2-account-create-psclf" event={"ID":"b2b021e1-1d1a-422d-994a-3af278f1145b","Type":"ContainerDied","Data":"01238ec8c46bc0a29cf6f5605d1df7f0c977656ca539746d6a822f3683c1896a"} Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.689722 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01238ec8c46bc0a29cf6f5605d1df7f0c977656ca539746d6a822f3683c1896a" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.689844 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd2-account-create-psclf" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.691227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-7776w" event={"ID":"6986c0d4-ba42-44f3-ba4f-51872e0345b6","Type":"ContainerDied","Data":"b8aaf282b2a7d1570b0fbb4b0208ff30f6f6e8c4efe1b4b82b89e3347c5fb4e7"} Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.691256 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-7776w" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.691268 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8aaf282b2a7d1570b0fbb4b0208ff30f6f6e8c4efe1b4b82b89e3347c5fb4e7" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.693255 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0bb7-account-create-jvn5p" event={"ID":"0ed3a39d-494f-40f0-a336-1ccbc4d73f93","Type":"ContainerDied","Data":"fe893de49887504cc17021531901bc53358e2919ee53a61806f01531b013fc32"} Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.693294 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe893de49887504cc17021531901bc53358e2919ee53a61806f01531b013fc32" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.693307 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-0bb7-account-create-jvn5p" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.695192 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-srrps" event={"ID":"6a6064d5-ead6-4ac9-8c53-6c72585d76ff","Type":"ContainerDied","Data":"e1afc0af2a7b893767184c98196baa2d15387bef1b56b1763712187365d06c5f"} Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.695217 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1afc0af2a7b893767184c98196baa2d15387bef1b56b1763712187365d06c5f" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.695219 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-srrps" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.696557 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-5n45p" event={"ID":"c22383eb-b38e-415a-b0e4-bde087a20c04","Type":"ContainerDied","Data":"62904cf036cfae53e86f3a77ec60fe6bc2361833ce3865fcd3f0cfd62b40e759"} Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.696590 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62904cf036cfae53e86f3a77ec60fe6bc2361833ce3865fcd3f0cfd62b40e759" Nov 24 19:35:17 crc kubenswrapper[4812]: I1124 19:35:17.696640 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-5n45p" Nov 24 19:35:18 crc kubenswrapper[4812]: I1124 19:35:18.980015 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-65c9b8d4f7-g5xtt" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Nov 24 19:35:19 crc kubenswrapper[4812]: I1124 19:35:19.043595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:19 crc kubenswrapper[4812]: E1124 19:35:19.043767 4812 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 19:35:19 crc kubenswrapper[4812]: E1124 19:35:19.043796 4812 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 19:35:19 crc kubenswrapper[4812]: E1124 19:35:19.043847 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift podName:c127eda7-bbfe-4197-b5cf-f4f99824d0c8 nodeName:}" failed. No retries permitted until 2025-11-24 19:35:35.043829328 +0000 UTC m=+1128.832781699 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift") pod "swift-storage-0" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8") : configmap "swift-ring-files" not found Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.294828 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-fk6qb"] Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.295888 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5a20600-c4a1-4a64-9c8b-c8cf8680abac" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.295917 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5a20600-c4a1-4a64-9c8b-c8cf8680abac" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.295946 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ed3a39d-494f-40f0-a336-1ccbc4d73f93" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.295961 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ed3a39d-494f-40f0-a336-1ccbc4d73f93" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.295990 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a6064d5-ead6-4ac9-8c53-6c72585d76ff" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296007 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a6064d5-ead6-4ac9-8c53-6c72585d76ff" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.296051 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerName="init" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296064 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerName="init" Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.296085 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6986c0d4-ba42-44f3-ba4f-51872e0345b6" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296100 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6986c0d4-ba42-44f3-ba4f-51872e0345b6" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.296144 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22383eb-b38e-415a-b0e4-bde087a20c04" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296156 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22383eb-b38e-415a-b0e4-bde087a20c04" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.296183 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2b021e1-1d1a-422d-994a-3af278f1145b" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296194 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2b021e1-1d1a-422d-994a-3af278f1145b" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: E1124 19:35:21.296215 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerName="dnsmasq-dns" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296230 4812 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerName="dnsmasq-dns" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296555 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="00d308d2-17b6-46fe-94ac-0cc7e0fd1e27" containerName="dnsmasq-dns" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296582 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a6064d5-ead6-4ac9-8c53-6c72585d76ff" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296601 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6986c0d4-ba42-44f3-ba4f-51872e0345b6" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296632 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22383eb-b38e-415a-b0e4-bde087a20c04" containerName="mariadb-database-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296651 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2b021e1-1d1a-422d-994a-3af278f1145b" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296672 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5a20600-c4a1-4a64-9c8b-c8cf8680abac" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.296692 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ed3a39d-494f-40f0-a336-1ccbc4d73f93" containerName="mariadb-account-create" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.297600 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.308222 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.308477 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z5qr5" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.311120 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fk6qb"] Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.382986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-config-data\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.383226 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-combined-ca-bundle\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.383309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-db-sync-config-data\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.383657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-4zmbp\" (UniqueName: \"kubernetes.io/projected/da169797-327e-4338-961d-4965dcb70d05-kube-api-access-4zmbp\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.485313 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmbp\" (UniqueName: \"kubernetes.io/projected/da169797-327e-4338-961d-4965dcb70d05-kube-api-access-4zmbp\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.485490 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-config-data\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.485537 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-combined-ca-bundle\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.485603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-db-sync-config-data\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.493831 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-config-data\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.494733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-combined-ca-bundle\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.494742 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-db-sync-config-data\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.516350 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmbp\" (UniqueName: \"kubernetes.io/projected/da169797-327e-4338-961d-4965dcb70d05-kube-api-access-4zmbp\") pod \"glance-db-sync-fk6qb\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") " pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.625162 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fk6qb" Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.740719 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3986d6e-e683-4327-a5cd-db98e23ca287" containerID="fb57fda78afe686a7cc9cc2b74189b0701e0efae994b7b22fa603354146ddb49" exitCode=0 Nov 24 19:35:21 crc kubenswrapper[4812]: I1124 19:35:21.741041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-99q2n" event={"ID":"b3986d6e-e683-4327-a5cd-db98e23ca287","Type":"ContainerDied","Data":"fb57fda78afe686a7cc9cc2b74189b0701e0efae994b7b22fa603354146ddb49"} Nov 24 19:35:22 crc kubenswrapper[4812]: I1124 19:35:22.142224 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fk6qb"] Nov 24 19:35:22 crc kubenswrapper[4812]: W1124 19:35:22.145636 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda169797_327e_4338_961d_4965dcb70d05.slice/crio-d9f7f3950bcede9b034f7f843e3ffbefe5d70e610d59138fab1b3130b7420c71 WatchSource:0}: Error finding container d9f7f3950bcede9b034f7f843e3ffbefe5d70e610d59138fab1b3130b7420c71: Status 404 returned error can't find the container with id d9f7f3950bcede9b034f7f843e3ffbefe5d70e610d59138fab1b3130b7420c71 Nov 24 19:35:22 crc kubenswrapper[4812]: I1124 19:35:22.758036 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fk6qb" event={"ID":"da169797-327e-4338-961d-4965dcb70d05","Type":"ContainerStarted","Data":"d9f7f3950bcede9b034f7f843e3ffbefe5d70e610d59138fab1b3130b7420c71"} Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.183807 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.212090 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26tc6\" (UniqueName: \"kubernetes.io/projected/b3986d6e-e683-4327-a5cd-db98e23ca287-kube-api-access-26tc6\") pod \"b3986d6e-e683-4327-a5cd-db98e23ca287\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.212206 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-combined-ca-bundle\") pod \"b3986d6e-e683-4327-a5cd-db98e23ca287\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.212289 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-scripts\") pod \"b3986d6e-e683-4327-a5cd-db98e23ca287\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.212966 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3986d6e-e683-4327-a5cd-db98e23ca287-etc-swift\") pod \"b3986d6e-e683-4327-a5cd-db98e23ca287\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.213984 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3986d6e-e683-4327-a5cd-db98e23ca287-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b3986d6e-e683-4327-a5cd-db98e23ca287" (UID: 
"b3986d6e-e683-4327-a5cd-db98e23ca287"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.214091 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-dispersionconf\") pod \"b3986d6e-e683-4327-a5cd-db98e23ca287\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.214148 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-swiftconf\") pod \"b3986d6e-e683-4327-a5cd-db98e23ca287\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.214172 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-ring-data-devices\") pod \"b3986d6e-e683-4327-a5cd-db98e23ca287\" (UID: \"b3986d6e-e683-4327-a5cd-db98e23ca287\") " Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.214740 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b3986d6e-e683-4327-a5cd-db98e23ca287-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.216093 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b3986d6e-e683-4327-a5cd-db98e23ca287" (UID: "b3986d6e-e683-4327-a5cd-db98e23ca287"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.216767 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3986d6e-e683-4327-a5cd-db98e23ca287-kube-api-access-26tc6" (OuterVolumeSpecName: "kube-api-access-26tc6") pod "b3986d6e-e683-4327-a5cd-db98e23ca287" (UID: "b3986d6e-e683-4327-a5cd-db98e23ca287"). InnerVolumeSpecName "kube-api-access-26tc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.223653 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b3986d6e-e683-4327-a5cd-db98e23ca287" (UID: "b3986d6e-e683-4327-a5cd-db98e23ca287"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.236936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-scripts" (OuterVolumeSpecName: "scripts") pod "b3986d6e-e683-4327-a5cd-db98e23ca287" (UID: "b3986d6e-e683-4327-a5cd-db98e23ca287"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.246761 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3986d6e-e683-4327-a5cd-db98e23ca287" (UID: "b3986d6e-e683-4327-a5cd-db98e23ca287"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.250049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b3986d6e-e683-4327-a5cd-db98e23ca287" (UID: "b3986d6e-e683-4327-a5cd-db98e23ca287"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.316295 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.316335 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.316347 4812 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.316370 4812 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b3986d6e-e683-4327-a5cd-db98e23ca287-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.316381 4812 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b3986d6e-e683-4327-a5cd-db98e23ca287-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.316392 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26tc6\" (UniqueName: \"kubernetes.io/projected/b3986d6e-e683-4327-a5cd-db98e23ca287-kube-api-access-26tc6\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.766088 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-99q2n" event={"ID":"b3986d6e-e683-4327-a5cd-db98e23ca287","Type":"ContainerDied","Data":"15a627f2459f718b8ef690c4286681bf625760b158b30d2e1de4f89014e7ab22"} Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.766416 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a627f2459f718b8ef690c4286681bf625760b158b30d2e1de4f89014e7ab22" Nov 24 19:35:23 crc kubenswrapper[4812]: I1124 19:35:23.766142 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-99q2n" Nov 24 19:35:24 crc kubenswrapper[4812]: I1124 19:35:24.213600 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 19:35:28 crc kubenswrapper[4812]: I1124 19:35:28.853983 4812 generic.go:334] "Generic (PLEG): container finished" podID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerID="19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a" exitCode=0 Nov 24 19:35:28 crc kubenswrapper[4812]: I1124 19:35:28.854063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7cb8ede6-6163-4906-89f7-7fe6458edc36","Type":"ContainerDied","Data":"19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a"} Nov 24 19:35:28 crc kubenswrapper[4812]: I1124 19:35:28.861153 4812 generic.go:334] "Generic (PLEG): container finished" podID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerID="0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0" exitCode=0 Nov 24 19:35:28 crc kubenswrapper[4812]: I1124 19:35:28.861237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2cd03e-b999-48ea-b540-7fd35356ba8b","Type":"ContainerDied","Data":"0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0"} Nov 24 19:35:31 crc kubenswrapper[4812]: I1124 19:35:31.658022 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-d48t8" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerName="ovn-controller" probeResult="failure" output=< Nov 24 19:35:31 crc kubenswrapper[4812]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 19:35:31 crc kubenswrapper[4812]: > Nov 24 19:35:31 crc kubenswrapper[4812]: I1124 19:35:31.675600 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.048839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.056866 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"swift-storage-0\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " pod="openstack/swift-storage-0" Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.342975 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.914839 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.928588 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2cd03e-b999-48ea-b540-7fd35356ba8b","Type":"ContainerStarted","Data":"f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4"} Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.929083 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.930897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fk6qb" event={"ID":"da169797-327e-4338-961d-4965dcb70d05","Type":"ContainerStarted","Data":"6784b207fae1272e4e64281366725f6ba901ff3f865eac1a5c7530617134de08"} Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.934599 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"f8448a8883db6320f67d86b35577d4e30a265e0d61e35756c50f18419d8bdb74"} Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.936251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7cb8ede6-6163-4906-89f7-7fe6458edc36","Type":"ContainerStarted","Data":"608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc"} Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.936520 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.957736 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371966.89706 podStartE2EDuration="1m9.957715944s" podCreationTimestamp="2025-11-24 19:34:26 +0000 UTC" firstStartedPulling="2025-11-24 19:34:28.385926124 +0000 UTC m=+1062.174878495" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:35:35.950934612 +0000 UTC m=+1129.739887003" watchObservedRunningTime="2025-11-24 19:35:35.957715944 +0000 UTC m=+1129.746668315" Nov 24 19:35:35 crc kubenswrapper[4812]: I1124 19:35:35.982510 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.953010307 podStartE2EDuration="1m10.982487357s" podCreationTimestamp="2025-11-24 19:34:25 +0000 UTC" firstStartedPulling="2025-11-24 19:34:28.261328939 +0000 UTC m=+1062.050281310" lastFinishedPulling="2025-11-24 19:34:53.290805949 +0000 UTC m=+1087.079758360" observedRunningTime="2025-11-24 19:35:35.97837603 +0000 UTC m=+1129.767328401" watchObservedRunningTime="2025-11-24 19:35:35.982487357 +0000 UTC m=+1129.771439728" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.005806 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-fk6qb" podStartSLOduration=2.233583634 podStartE2EDuration="15.005787488s" podCreationTimestamp="2025-11-24 19:35:21 +0000 UTC" firstStartedPulling="2025-11-24 19:35:22.147978622 +0000 UTC m=+1115.936930993" lastFinishedPulling="2025-11-24 19:35:34.920182456 +0000 UTC m=+1128.709134847" observedRunningTime="2025-11-24 19:35:35.995942829 +0000 UTC m=+1129.784895200" 
watchObservedRunningTime="2025-11-24 19:35:36.005787488 +0000 UTC m=+1129.794739849" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.687261 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-d48t8" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerName="ovn-controller" probeResult="failure" output=< Nov 24 19:35:36 crc kubenswrapper[4812]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 19:35:36 crc kubenswrapper[4812]: > Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.688847 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.952115 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-d48t8-config-hlztt"] Nov 24 19:35:36 crc kubenswrapper[4812]: E1124 19:35:36.952699 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3986d6e-e683-4327-a5cd-db98e23ca287" containerName="swift-ring-rebalance" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.952770 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3986d6e-e683-4327-a5cd-db98e23ca287" containerName="swift-ring-rebalance" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.952974 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3986d6e-e683-4327-a5cd-db98e23ca287" containerName="swift-ring-rebalance" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.953536 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.958948 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 19:35:36 crc kubenswrapper[4812]: I1124 19:35:36.964010 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-d48t8-config-hlztt"] Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.085224 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-scripts\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.085537 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run-ovn\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.085806 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-log-ovn\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.085922 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8xt\" (UniqueName: \"kubernetes.io/projected/d901a847-bef4-473e-8597-9e27432a82f7-kube-api-access-xd8xt\") pod 
\"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.086227 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-additional-scripts\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.086369 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.187859 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-log-ovn\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.187933 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd8xt\" (UniqueName: \"kubernetes.io/projected/d901a847-bef4-473e-8597-9e27432a82f7-kube-api-access-xd8xt\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.188131 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-additional-scripts\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.188179 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.188195 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-log-ovn\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.188262 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-scripts\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.188298 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run-ovn\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.188414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.188600 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run-ovn\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.189049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-additional-scripts\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.191768 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-scripts\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.218262 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd8xt\" (UniqueName: \"kubernetes.io/projected/d901a847-bef4-473e-8597-9e27432a82f7-kube-api-access-xd8xt\") pod \"ovn-controller-d48t8-config-hlztt\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.270686 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.735504 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-d48t8-config-hlztt"] Nov 24 19:35:37 crc kubenswrapper[4812]: I1124 19:35:37.952427 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-d48t8-config-hlztt" event={"ID":"d901a847-bef4-473e-8597-9e27432a82f7","Type":"ContainerStarted","Data":"5bf72c03459bf12fcbf72b2fa9fbe55b50927861e3601e017c202a4a0509102a"} Nov 24 19:35:38 crc kubenswrapper[4812]: I1124 19:35:38.961247 4812 generic.go:334] "Generic (PLEG): container finished" podID="d901a847-bef4-473e-8597-9e27432a82f7" containerID="2f8101fdc30ddfa6c4c114cd05ac79aa0d1d7716e24bd8673c1568ea942f93c1" exitCode=0 Nov 24 19:35:38 crc kubenswrapper[4812]: I1124 19:35:38.961281 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-d48t8-config-hlztt" event={"ID":"d901a847-bef4-473e-8597-9e27432a82f7","Type":"ContainerDied","Data":"2f8101fdc30ddfa6c4c114cd05ac79aa0d1d7716e24bd8673c1568ea942f93c1"} Nov 24 19:35:38 crc kubenswrapper[4812]: I1124 19:35:38.963777 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b"} Nov 24 19:35:38 crc kubenswrapper[4812]: I1124 19:35:38.963815 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea"} Nov 24 19:35:38 crc kubenswrapper[4812]: I1124 19:35:38.963826 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce"} Nov 24 19:35:38 crc kubenswrapper[4812]: I1124 19:35:38.963834 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f"} Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.340652 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.443781 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-log-ovn\") pod \"d901a847-bef4-473e-8597-9e27432a82f7\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.443872 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d901a847-bef4-473e-8597-9e27432a82f7" (UID: "d901a847-bef4-473e-8597-9e27432a82f7"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.443878 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-scripts\") pod \"d901a847-bef4-473e-8597-9e27432a82f7\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.443956 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run-ovn\") pod \"d901a847-bef4-473e-8597-9e27432a82f7\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.444056 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8xt\" (UniqueName: \"kubernetes.io/projected/d901a847-bef4-473e-8597-9e27432a82f7-kube-api-access-xd8xt\") pod \"d901a847-bef4-473e-8597-9e27432a82f7\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.444124 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-additional-scripts\") pod \"d901a847-bef4-473e-8597-9e27432a82f7\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.444258 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run\") pod \"d901a847-bef4-473e-8597-9e27432a82f7\" (UID: \"d901a847-bef4-473e-8597-9e27432a82f7\") " Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.444635 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d901a847-bef4-473e-8597-9e27432a82f7" (UID: "d901a847-bef4-473e-8597-9e27432a82f7"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.444674 4812 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.444697 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run" (OuterVolumeSpecName: "var-run") pod "d901a847-bef4-473e-8597-9e27432a82f7" (UID: "d901a847-bef4-473e-8597-9e27432a82f7"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.444843 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d901a847-bef4-473e-8597-9e27432a82f7" (UID: "d901a847-bef4-473e-8597-9e27432a82f7"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.445220 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-scripts" (OuterVolumeSpecName: "scripts") pod "d901a847-bef4-473e-8597-9e27432a82f7" (UID: "d901a847-bef4-473e-8597-9e27432a82f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.450507 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d901a847-bef4-473e-8597-9e27432a82f7-kube-api-access-xd8xt" (OuterVolumeSpecName: "kube-api-access-xd8xt") pod "d901a847-bef4-473e-8597-9e27432a82f7" (UID: "d901a847-bef4-473e-8597-9e27432a82f7"). InnerVolumeSpecName "kube-api-access-xd8xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.546158 4812 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.546377 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.546387 4812 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d901a847-bef4-473e-8597-9e27432a82f7-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.546400 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8xt\" (UniqueName: \"kubernetes.io/projected/d901a847-bef4-473e-8597-9e27432a82f7-kube-api-access-xd8xt\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.546410 4812 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d901a847-bef4-473e-8597-9e27432a82f7-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.985892 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-d48t8-config-hlztt" event={"ID":"d901a847-bef4-473e-8597-9e27432a82f7","Type":"ContainerDied","Data":"5bf72c03459bf12fcbf72b2fa9fbe55b50927861e3601e017c202a4a0509102a"} Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.985927 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf72c03459bf12fcbf72b2fa9fbe55b50927861e3601e017c202a4a0509102a" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.985976 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-d48t8-config-hlztt" Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.995357 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124"} Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.995407 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b"} Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.995419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145"} Nov 24 19:35:40 crc kubenswrapper[4812]: I1124 19:35:40.995428 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165"} Nov 24 19:35:41 crc kubenswrapper[4812]: I1124 19:35:41.446438 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-d48t8-config-hlztt"] Nov 24 19:35:41 crc kubenswrapper[4812]: I1124 19:35:41.454380 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-d48t8-config-hlztt"] Nov 24 19:35:41 crc kubenswrapper[4812]: I1124 19:35:41.670024 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-d48t8" Nov 24 19:35:42 crc kubenswrapper[4812]: I1124 19:35:42.980387 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d901a847-bef4-473e-8597-9e27432a82f7" path="/var/lib/kubelet/pods/d901a847-bef4-473e-8597-9e27432a82f7/volumes" Nov 24 19:35:43 crc kubenswrapper[4812]: I1124 19:35:43.015738 4812 generic.go:334] "Generic (PLEG): container finished" podID="da169797-327e-4338-961d-4965dcb70d05" containerID="6784b207fae1272e4e64281366725f6ba901ff3f865eac1a5c7530617134de08" exitCode=0 Nov 24 19:35:43 crc kubenswrapper[4812]: I1124 19:35:43.015836 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fk6qb" event={"ID":"da169797-327e-4338-961d-4965dcb70d05","Type":"ContainerDied","Data":"6784b207fae1272e4e64281366725f6ba901ff3f865eac1a5c7530617134de08"} Nov 24 19:35:43 crc kubenswrapper[4812]: I1124 19:35:43.021441 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3"} Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.042508 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686"} Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.042854 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8"} Nov 24 19:35:44 crc kubenswrapper[4812]: 
Nov 24 19:35:43 crc kubenswrapper[4812]: I1124 19:35:43.015738 4812 generic.go:334] "Generic (PLEG): container finished" podID="da169797-327e-4338-961d-4965dcb70d05" containerID="6784b207fae1272e4e64281366725f6ba901ff3f865eac1a5c7530617134de08" exitCode=0
Nov 24 19:35:43 crc kubenswrapper[4812]: I1124 19:35:43.015836 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fk6qb" event={"ID":"da169797-327e-4338-961d-4965dcb70d05","Type":"ContainerDied","Data":"6784b207fae1272e4e64281366725f6ba901ff3f865eac1a5c7530617134de08"}
Nov 24 19:35:43 crc kubenswrapper[4812]: I1124 19:35:43.021441 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3"}
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.042508 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686"}
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.042854 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8"}
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.042865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c"}
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.042874 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830"}
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.470066 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fk6qb"
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.611778 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-db-sync-config-data\") pod \"da169797-327e-4338-961d-4965dcb70d05\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") "
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.611844 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-combined-ca-bundle\") pod \"da169797-327e-4338-961d-4965dcb70d05\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") "
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.612034 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zmbp\" (UniqueName: \"kubernetes.io/projected/da169797-327e-4338-961d-4965dcb70d05-kube-api-access-4zmbp\") pod \"da169797-327e-4338-961d-4965dcb70d05\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") "
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.612068 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-config-data\") pod \"da169797-327e-4338-961d-4965dcb70d05\" (UID: \"da169797-327e-4338-961d-4965dcb70d05\") "
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.617309 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "da169797-327e-4338-961d-4965dcb70d05" (UID: "da169797-327e-4338-961d-4965dcb70d05"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.618580 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da169797-327e-4338-961d-4965dcb70d05-kube-api-access-4zmbp" (OuterVolumeSpecName: "kube-api-access-4zmbp") pod "da169797-327e-4338-961d-4965dcb70d05" (UID: "da169797-327e-4338-961d-4965dcb70d05"). InnerVolumeSpecName "kube-api-access-4zmbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.638288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da169797-327e-4338-961d-4965dcb70d05" (UID: "da169797-327e-4338-961d-4965dcb70d05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.654584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-config-data" (OuterVolumeSpecName: "config-data") pod "da169797-327e-4338-961d-4965dcb70d05" (UID: "da169797-327e-4338-961d-4965dcb70d05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.713482 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.713514 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.713526 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zmbp\" (UniqueName: \"kubernetes.io/projected/da169797-327e-4338-961d-4965dcb70d05-kube-api-access-4zmbp\") on node \"crc\" DevicePath \"\""
Nov 24 19:35:44 crc kubenswrapper[4812]: I1124 19:35:44.713538 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da169797-327e-4338-961d-4965dcb70d05-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.064134 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fk6qb" event={"ID":"da169797-327e-4338-961d-4965dcb70d05","Type":"ContainerDied","Data":"d9f7f3950bcede9b034f7f843e3ffbefe5d70e610d59138fab1b3130b7420c71"}
Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.064239 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f7f3950bcede9b034f7f843e3ffbefe5d70e610d59138fab1b3130b7420c71"
Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.064304 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fk6qb"
(UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-nb\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.732105 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-nb\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.732223 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-dns-svc\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.732251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-config\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.732275 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-sb\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.732292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2trk\" (UniqueName: \"kubernetes.io/projected/32ccb340-a030-436e-9484-c6021ee28bbe-kube-api-access-m2trk\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.733076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-nb\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.733240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-config\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.733362 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-sb\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.733653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-dns-svc\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" 
(UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.749158 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2trk\" (UniqueName: \"kubernetes.io/projected/32ccb340-a030-436e-9484-c6021ee28bbe-kube-api-access-m2trk\") pod \"dnsmasq-dns-84ddf475bf-dvmhs\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:45 crc kubenswrapper[4812]: I1124 19:35:45.830480 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:46 crc kubenswrapper[4812]: I1124 19:35:46.080089 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1"} Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.655522 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.731521 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.825048 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84ddf475bf-dvmhs"] Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.922135 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7l8kj"] Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.923141 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7l8kj" Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.942062 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7l8kj"] Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.010836 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vjwqm"] Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.012497 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.922135 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-7l8kj"]
Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.923141 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7l8kj"
Nov 24 19:35:47 crc kubenswrapper[4812]: I1124 19:35:47.942062 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7l8kj"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.010836 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vjwqm"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.012497 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.020759 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vjwqm"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.070895 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4lv\" (UniqueName: \"kubernetes.io/projected/cfb137c7-3a42-4a4a-b463-f27806805277-kube-api-access-gg4lv\") pod \"cinder-db-create-7l8kj\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " pod="openstack/cinder-db-create-7l8kj"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.070949 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a190769b-2cd0-4c64-944c-3a66e6b61e95-operator-scripts\") pod \"barbican-db-create-vjwqm\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.071079 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb137c7-3a42-4a4a-b463-f27806805277-operator-scripts\") pod \"cinder-db-create-7l8kj\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " pod="openstack/cinder-db-create-7l8kj"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.071139 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mk8r\" (UniqueName: \"kubernetes.io/projected/a190769b-2cd0-4c64-944c-3a66e6b61e95-kube-api-access-9mk8r\") pod \"barbican-db-create-vjwqm\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.099858 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" event={"ID":"32ccb340-a030-436e-9484-c6021ee28bbe","Type":"ContainerStarted","Data":"68077d8c2ffcd8e9903e93459e9cefc862edac21fb0c673959ea188779299d2b"}
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.123488 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fac2-account-create-szxkh"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.124526 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.127809 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.146383 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fac2-account-create-szxkh"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.198717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg4lv\" (UniqueName: \"kubernetes.io/projected/cfb137c7-3a42-4a4a-b463-f27806805277-kube-api-access-gg4lv\") pod \"cinder-db-create-7l8kj\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " pod="openstack/cinder-db-create-7l8kj"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.198775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a190769b-2cd0-4c64-944c-3a66e6b61e95-operator-scripts\") pod \"barbican-db-create-vjwqm\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.198830 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb137c7-3a42-4a4a-b463-f27806805277-operator-scripts\") pod \"cinder-db-create-7l8kj\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " pod="openstack/cinder-db-create-7l8kj"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.198859 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mk8r\" (UniqueName: \"kubernetes.io/projected/a190769b-2cd0-4c64-944c-3a66e6b61e95-kube-api-access-9mk8r\") pod \"barbican-db-create-vjwqm\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.199605 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a190769b-2cd0-4c64-944c-3a66e6b61e95-operator-scripts\") pod \"barbican-db-create-vjwqm\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.201734 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb137c7-3a42-4a4a-b463-f27806805277-operator-scripts\") pod \"cinder-db-create-7l8kj\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " pod="openstack/cinder-db-create-7l8kj"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.225760 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5926-account-create-lsmsd"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.226786 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5926-account-create-lsmsd"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.229744 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.234146 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mk8r\" (UniqueName: \"kubernetes.io/projected/a190769b-2cd0-4c64-944c-3a66e6b61e95-kube-api-access-9mk8r\") pod \"barbican-db-create-vjwqm\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.244784 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5926-account-create-lsmsd"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.247825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg4lv\" (UniqueName: \"kubernetes.io/projected/cfb137c7-3a42-4a4a-b463-f27806805277-kube-api-access-gg4lv\") pod \"cinder-db-create-7l8kj\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " pod="openstack/cinder-db-create-7l8kj"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.300491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jd6c\" (UniqueName: \"kubernetes.io/projected/61c60661-872c-43f4-8830-7d96fd575bd8-kube-api-access-9jd6c\") pod \"barbican-fac2-account-create-szxkh\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.300556 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61c60661-872c-43f4-8830-7d96fd575bd8-operator-scripts\") pod \"barbican-fac2-account-create-szxkh\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.314012 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-62xwh"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.315091 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62xwh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.322155 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b5j9w"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.322388 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.322492 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.322600 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.324743 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vjwqm"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.333935 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-62xwh"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.404051 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jd6c\" (UniqueName: \"kubernetes.io/projected/61c60661-872c-43f4-8830-7d96fd575bd8-kube-api-access-9jd6c\") pod \"barbican-fac2-account-create-szxkh\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.404104 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-combined-ca-bundle\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.404160 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61c60661-872c-43f4-8830-7d96fd575bd8-operator-scripts\") pod \"barbican-fac2-account-create-szxkh\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.404179 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-config-data\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.404214 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87g5s\" (UniqueName: \"kubernetes.io/projected/cb65fa33-2178-4d99-9279-ace1b50e4089-kube-api-access-87g5s\") pod \"cinder-5926-account-create-lsmsd\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " pod="openstack/cinder-5926-account-create-lsmsd"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.404486 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mftzj\" (UniqueName: \"kubernetes.io/projected/cf8ae591-fb65-4513-b7eb-fd8f13c22761-kube-api-access-mftzj\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.404525 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb65fa33-2178-4d99-9279-ace1b50e4089-operator-scripts\") pod \"cinder-5926-account-create-lsmsd\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " pod="openstack/cinder-5926-account-create-lsmsd"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.405750 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61c60661-872c-43f4-8830-7d96fd575bd8-operator-scripts\") pod \"barbican-fac2-account-create-szxkh\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.408718 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2w8kz"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.409849 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2w8kz"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.433702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jd6c\" (UniqueName: \"kubernetes.io/projected/61c60661-872c-43f4-8830-7d96fd575bd8-kube-api-access-9jd6c\") pod \"barbican-fac2-account-create-szxkh\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.435716 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-448d-account-create-z7k4c"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.436817 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-448d-account-create-z7k4c"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.438307 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.443352 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fac2-account-create-szxkh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.458235 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2w8kz"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.485092 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-448d-account-create-z7k4c"]
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.505621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb65fa33-2178-4d99-9279-ace1b50e4089-operator-scripts\") pod \"cinder-5926-account-create-lsmsd\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " pod="openstack/cinder-5926-account-create-lsmsd"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.505678 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-combined-ca-bundle\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.505721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-config-data\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.505756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87g5s\" (UniqueName: \"kubernetes.io/projected/cb65fa33-2178-4d99-9279-ace1b50e4089-kube-api-access-87g5s\") pod \"cinder-5926-account-create-lsmsd\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " pod="openstack/cinder-5926-account-create-lsmsd"
Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.505779 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/49981ca5-ad04-4746-8900-a440bb82bc36-operator-scripts\") pod \"neutron-db-create-2w8kz\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.505821 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7nrt\" (UniqueName: \"kubernetes.io/projected/49981ca5-ad04-4746-8900-a440bb82bc36-kube-api-access-l7nrt\") pod \"neutron-db-create-2w8kz\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.505838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mftzj\" (UniqueName: \"kubernetes.io/projected/cf8ae591-fb65-4513-b7eb-fd8f13c22761-kube-api-access-mftzj\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.506904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb65fa33-2178-4d99-9279-ace1b50e4089-operator-scripts\") pod \"cinder-5926-account-create-lsmsd\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " pod="openstack/cinder-5926-account-create-lsmsd" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.512487 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-config-data\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.519973 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-combined-ca-bundle\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.524732 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87g5s\" (UniqueName: \"kubernetes.io/projected/cb65fa33-2178-4d99-9279-ace1b50e4089-kube-api-access-87g5s\") pod \"cinder-5926-account-create-lsmsd\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " pod="openstack/cinder-5926-account-create-lsmsd" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.527615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mftzj\" (UniqueName: \"kubernetes.io/projected/cf8ae591-fb65-4513-b7eb-fd8f13c22761-kube-api-access-mftzj\") pod \"keystone-db-sync-62xwh\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " pod="openstack/keystone-db-sync-62xwh" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.540582 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5926-account-create-lsmsd" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.545426 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7l8kj" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.575248 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-62xwh" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.610001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhwrp\" (UniqueName: \"kubernetes.io/projected/5530b75c-ecf7-4774-8d4e-1acd7c14d729-kube-api-access-bhwrp\") pod \"neutron-448d-account-create-z7k4c\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.610071 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49981ca5-ad04-4746-8900-a440bb82bc36-operator-scripts\") pod \"neutron-db-create-2w8kz\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.610112 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5530b75c-ecf7-4774-8d4e-1acd7c14d729-operator-scripts\") pod \"neutron-448d-account-create-z7k4c\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.610142 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7nrt\" (UniqueName: \"kubernetes.io/projected/49981ca5-ad04-4746-8900-a440bb82bc36-kube-api-access-l7nrt\") pod \"neutron-db-create-2w8kz\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.611087 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49981ca5-ad04-4746-8900-a440bb82bc36-operator-scripts\") pod \"neutron-db-create-2w8kz\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.629683 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7nrt\" (UniqueName: \"kubernetes.io/projected/49981ca5-ad04-4746-8900-a440bb82bc36-kube-api-access-l7nrt\") pod \"neutron-db-create-2w8kz\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.711285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5530b75c-ecf7-4774-8d4e-1acd7c14d729-operator-scripts\") pod \"neutron-448d-account-create-z7k4c\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.711719 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhwrp\" (UniqueName: \"kubernetes.io/projected/5530b75c-ecf7-4774-8d4e-1acd7c14d729-kube-api-access-bhwrp\") pod \"neutron-448d-account-create-z7k4c\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.712843 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5530b75c-ecf7-4774-8d4e-1acd7c14d729-operator-scripts\") pod 
\"neutron-448d-account-create-z7k4c\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.731516 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhwrp\" (UniqueName: \"kubernetes.io/projected/5530b75c-ecf7-4774-8d4e-1acd7c14d729-kube-api-access-bhwrp\") pod \"neutron-448d-account-create-z7k4c\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.857061 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vjwqm"] Nov 24 19:35:48 crc kubenswrapper[4812]: W1124 19:35:48.869351 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda190769b_2cd0_4c64_944c_3a66e6b61e95.slice/crio-83f346b775c4fcb118893ef49de091608babd66331942159f97cea6eb657cce4 WatchSource:0}: Error finding container 83f346b775c4fcb118893ef49de091608babd66331942159f97cea6eb657cce4: Status 404 returned error can't find the container with id 83f346b775c4fcb118893ef49de091608babd66331942159f97cea6eb657cce4 Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.918126 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:48 crc kubenswrapper[4812]: I1124 19:35:48.940213 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.011565 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fac2-account-create-szxkh"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.136264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerStarted","Data":"5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4"} Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.147568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vjwqm" event={"ID":"a190769b-2cd0-4c64-944c-3a66e6b61e95","Type":"ContainerStarted","Data":"b703ec77a0ae9156427e8cc74e3ce3d598c22e004a5eb55415fde3f16621e23f"} Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.147618 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vjwqm" event={"ID":"a190769b-2cd0-4c64-944c-3a66e6b61e95","Type":"ContainerStarted","Data":"83f346b775c4fcb118893ef49de091608babd66331942159f97cea6eb657cce4"} Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.159720 4812 generic.go:334] "Generic (PLEG): container finished" podID="32ccb340-a030-436e-9484-c6021ee28bbe" containerID="d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe" exitCode=0 Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.159781 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" event={"ID":"32ccb340-a030-436e-9484-c6021ee28bbe","Type":"ContainerDied","Data":"d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe"} Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.171053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fac2-account-create-szxkh" 
event={"ID":"61c60661-872c-43f4-8830-7d96fd575bd8","Type":"ContainerStarted","Data":"92642266d9c30108ff16e4ff2c02c026e62f39a75b0095b3c10073e118d0c417"} Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.186077 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.406858945 podStartE2EDuration="47.186056681s" podCreationTimestamp="2025-11-24 19:35:02 +0000 UTC" firstStartedPulling="2025-11-24 19:35:35.922957738 +0000 UTC m=+1129.711910109" lastFinishedPulling="2025-11-24 19:35:42.702155434 +0000 UTC m=+1136.491107845" observedRunningTime="2025-11-24 19:35:49.176466729 +0000 UTC m=+1142.965419120" watchObservedRunningTime="2025-11-24 19:35:49.186056681 +0000 UTC m=+1142.975009042" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.259719 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-vjwqm" podStartSLOduration=2.25970294 podStartE2EDuration="2.25970294s" podCreationTimestamp="2025-11-24 19:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:35:49.257739395 +0000 UTC m=+1143.046691776" watchObservedRunningTime="2025-11-24 19:35:49.25970294 +0000 UTC m=+1143.048655311" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.389160 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-7l8kj"] Nov 24 19:35:49 crc kubenswrapper[4812]: W1124 19:35:49.394347 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfb137c7_3a42_4a4a_b463_f27806805277.slice/crio-865188c6a1baf29348f9e501d3997477e456699ce67604c635e4fa6d64a51402 WatchSource:0}: Error finding container 865188c6a1baf29348f9e501d3997477e456699ce67604c635e4fa6d64a51402: Status 404 returned error can't find the container with id 865188c6a1baf29348f9e501d3997477e456699ce67604c635e4fa6d64a51402 Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.402762 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-62xwh"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.454368 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5926-account-create-lsmsd"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.475769 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2w8kz"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.514287 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84ddf475bf-dvmhs"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.544101 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-rfzx9"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.545410 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.549730 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.571836 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-rfzx9"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.599455 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-448d-account-create-z7k4c"] Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.633916 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-config\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.634165 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq7dn\" (UniqueName: \"kubernetes.io/projected/79f12eac-94ed-4787-b2ce-92c02040e8f0-kube-api-access-lq7dn\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.634251 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.634523 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.634580 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-svc\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.634645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.736246 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-config\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.736322 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq7dn\" 
(UniqueName: \"kubernetes.io/projected/79f12eac-94ed-4787-b2ce-92c02040e8f0-kube-api-access-lq7dn\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.736636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.737064 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-config\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.737411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.737555 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.738156 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-swift-storage-0\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.738191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-svc\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.738260 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.738409 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-svc\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.738821 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:49 crc kubenswrapper[4812]: I1124 19:35:49.762292 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq7dn\" (UniqueName: \"kubernetes.io/projected/79f12eac-94ed-4787-b2ce-92c02040e8f0-kube-api-access-lq7dn\") pod \"dnsmasq-dns-6856c564b9-rfzx9\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.008365 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.190877 4812 generic.go:334] "Generic (PLEG): container finished" podID="cb65fa33-2178-4d99-9279-ace1b50e4089" containerID="4ae81d337940329ca754c7061c6455b5bfd2988945b51436bb58b67fc8c4cf02" exitCode=0 Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.190950 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5926-account-create-lsmsd" event={"ID":"cb65fa33-2178-4d99-9279-ace1b50e4089","Type":"ContainerDied","Data":"4ae81d337940329ca754c7061c6455b5bfd2988945b51436bb58b67fc8c4cf02"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.191282 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5926-account-create-lsmsd" event={"ID":"cb65fa33-2178-4d99-9279-ace1b50e4089","Type":"ContainerStarted","Data":"e03239b07c6820fc3b3a12a73c4e358a3f952c0ea81a9daef077f5867f1ef913"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.201391 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" event={"ID":"32ccb340-a030-436e-9484-c6021ee28bbe","Type":"ContainerStarted","Data":"2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.201442 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.211823 4812 generic.go:334] "Generic (PLEG): container finished" podID="5530b75c-ecf7-4774-8d4e-1acd7c14d729" containerID="35c1eabbdd36c309a8caf2aee002a0c9f518678113009e4ab89110691a9d1f18" exitCode=0 Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.211967 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-448d-account-create-z7k4c" event={"ID":"5530b75c-ecf7-4774-8d4e-1acd7c14d729","Type":"ContainerDied","Data":"35c1eabbdd36c309a8caf2aee002a0c9f518678113009e4ab89110691a9d1f18"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.211992 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-448d-account-create-z7k4c" event={"ID":"5530b75c-ecf7-4774-8d4e-1acd7c14d729","Type":"ContainerStarted","Data":"d0fb753ba53ec7b152404e9c733589e12d03c6169ec7037258c522701936c8ca"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.215598 4812 generic.go:334] "Generic (PLEG): container finished" podID="49981ca5-ad04-4746-8900-a440bb82bc36" containerID="a0e49ffe5998442991d74da802e4da1613968b0b01164fd062d312a1654b88c3" exitCode=0 Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.215676 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2w8kz" event={"ID":"49981ca5-ad04-4746-8900-a440bb82bc36","Type":"ContainerDied","Data":"a0e49ffe5998442991d74da802e4da1613968b0b01164fd062d312a1654b88c3"} Nov 24 
19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.215706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2w8kz" event={"ID":"49981ca5-ad04-4746-8900-a440bb82bc36","Type":"ContainerStarted","Data":"b840d3dac6e5dcf6f4a7473a32a80edb9a73d411d20ba2b7b1d31ea6fe9ecd8e"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.216969 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62xwh" event={"ID":"cf8ae591-fb65-4513-b7eb-fd8f13c22761","Type":"ContainerStarted","Data":"f82a81cf191dfd970354b71de4e613f31a3b06eded86d439ef97f5f75e4fff7d"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.218578 4812 generic.go:334] "Generic (PLEG): container finished" podID="61c60661-872c-43f4-8830-7d96fd575bd8" containerID="46b5e8995de160649c855995e4c4380c56e1024966f5c97fad37845a75a8e823" exitCode=0 Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.218652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fac2-account-create-szxkh" event={"ID":"61c60661-872c-43f4-8830-7d96fd575bd8","Type":"ContainerDied","Data":"46b5e8995de160649c855995e4c4380c56e1024966f5c97fad37845a75a8e823"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.221260 4812 generic.go:334] "Generic (PLEG): container finished" podID="a190769b-2cd0-4c64-944c-3a66e6b61e95" containerID="b703ec77a0ae9156427e8cc74e3ce3d598c22e004a5eb55415fde3f16621e23f" exitCode=0 Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.221444 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vjwqm" event={"ID":"a190769b-2cd0-4c64-944c-3a66e6b61e95","Type":"ContainerDied","Data":"b703ec77a0ae9156427e8cc74e3ce3d598c22e004a5eb55415fde3f16621e23f"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.223254 4812 generic.go:334] "Generic (PLEG): container finished" podID="cfb137c7-3a42-4a4a-b463-f27806805277" containerID="3013595fb4bd60e42e57392ce0200dcf65e129516355580bf500376553432846" exitCode=0 Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.223342 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7l8kj" event={"ID":"cfb137c7-3a42-4a4a-b463-f27806805277","Type":"ContainerDied","Data":"3013595fb4bd60e42e57392ce0200dcf65e129516355580bf500376553432846"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.223386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7l8kj" event={"ID":"cfb137c7-3a42-4a4a-b463-f27806805277","Type":"ContainerStarted","Data":"865188c6a1baf29348f9e501d3997477e456699ce67604c635e4fa6d64a51402"} Nov 24 19:35:50 crc kubenswrapper[4812]: I1124 19:35:50.230359 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" podStartSLOduration=5.23031321 podStartE2EDuration="5.23031321s" podCreationTimestamp="2025-11-24 19:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:35:50.228708894 +0000 UTC m=+1144.017661265" watchObservedRunningTime="2025-11-24 19:35:50.23031321 +0000 UTC m=+1144.019265581" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:50.490583 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-rfzx9"] Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.231961 4812 generic.go:334] "Generic (PLEG): container finished" podID="79f12eac-94ed-4787-b2ce-92c02040e8f0" 
containerID="7233454ac30f895f85b78aa1117a8825c252bbd3e311ab41c2d20b25ed4e1a53" exitCode=0 Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.232748 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" event={"ID":"79f12eac-94ed-4787-b2ce-92c02040e8f0","Type":"ContainerDied","Data":"7233454ac30f895f85b78aa1117a8825c252bbd3e311ab41c2d20b25ed4e1a53"} Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.232779 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" event={"ID":"79f12eac-94ed-4787-b2ce-92c02040e8f0","Type":"ContainerStarted","Data":"aa523295df4d85ae36a7bf60d0bd9edd215436ad649f43a34110b1a1f3e2a5ef"} Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.233037 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" podUID="32ccb340-a030-436e-9484-c6021ee28bbe" containerName="dnsmasq-dns" containerID="cri-o://2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1" gracePeriod=10 Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.766379 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5926-account-create-lsmsd" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.782751 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7l8kj" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.784001 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.879187 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg4lv\" (UniqueName: \"kubernetes.io/projected/cfb137c7-3a42-4a4a-b463-f27806805277-kube-api-access-gg4lv\") pod \"cfb137c7-3a42-4a4a-b463-f27806805277\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.879244 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87g5s\" (UniqueName: \"kubernetes.io/projected/cb65fa33-2178-4d99-9279-ace1b50e4089-kube-api-access-87g5s\") pod \"cb65fa33-2178-4d99-9279-ace1b50e4089\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.879280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49981ca5-ad04-4746-8900-a440bb82bc36-operator-scripts\") pod \"49981ca5-ad04-4746-8900-a440bb82bc36\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.879316 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7nrt\" (UniqueName: \"kubernetes.io/projected/49981ca5-ad04-4746-8900-a440bb82bc36-kube-api-access-l7nrt\") pod \"49981ca5-ad04-4746-8900-a440bb82bc36\" (UID: \"49981ca5-ad04-4746-8900-a440bb82bc36\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.879363 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb65fa33-2178-4d99-9279-ace1b50e4089-operator-scripts\") pod \"cb65fa33-2178-4d99-9279-ace1b50e4089\" (UID: \"cb65fa33-2178-4d99-9279-ace1b50e4089\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.879457 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb137c7-3a42-4a4a-b463-f27806805277-operator-scripts\") pod \"cfb137c7-3a42-4a4a-b463-f27806805277\" (UID: \"cfb137c7-3a42-4a4a-b463-f27806805277\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.879986 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49981ca5-ad04-4746-8900-a440bb82bc36-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49981ca5-ad04-4746-8900-a440bb82bc36" (UID: "49981ca5-ad04-4746-8900-a440bb82bc36"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.880435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfb137c7-3a42-4a4a-b463-f27806805277-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfb137c7-3a42-4a4a-b463-f27806805277" (UID: "cfb137c7-3a42-4a4a-b463-f27806805277"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.880475 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb65fa33-2178-4d99-9279-ace1b50e4089-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb65fa33-2178-4d99-9279-ace1b50e4089" (UID: "cb65fa33-2178-4d99-9279-ace1b50e4089"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.884860 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb137c7-3a42-4a4a-b463-f27806805277-kube-api-access-gg4lv" (OuterVolumeSpecName: "kube-api-access-gg4lv") pod "cfb137c7-3a42-4a4a-b463-f27806805277" (UID: "cfb137c7-3a42-4a4a-b463-f27806805277"). InnerVolumeSpecName "kube-api-access-gg4lv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.885581 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.885591 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49981ca5-ad04-4746-8900-a440bb82bc36-kube-api-access-l7nrt" (OuterVolumeSpecName: "kube-api-access-l7nrt") pod "49981ca5-ad04-4746-8900-a440bb82bc36" (UID: "49981ca5-ad04-4746-8900-a440bb82bc36"). InnerVolumeSpecName "kube-api-access-l7nrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.886760 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb65fa33-2178-4d99-9279-ace1b50e4089-kube-api-access-87g5s" (OuterVolumeSpecName: "kube-api-access-87g5s") pod "cb65fa33-2178-4d99-9279-ace1b50e4089" (UID: "cb65fa33-2178-4d99-9279-ace1b50e4089"). InnerVolumeSpecName "kube-api-access-87g5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.893117 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vjwqm" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.910462 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fac2-account-create-szxkh" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.970166 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.980832 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a190769b-2cd0-4c64-944c-3a66e6b61e95-operator-scripts\") pod \"a190769b-2cd0-4c64-944c-3a66e6b61e95\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.980959 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jd6c\" (UniqueName: \"kubernetes.io/projected/61c60661-872c-43f4-8830-7d96fd575bd8-kube-api-access-9jd6c\") pod \"61c60661-872c-43f4-8830-7d96fd575bd8\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.981007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mk8r\" (UniqueName: \"kubernetes.io/projected/a190769b-2cd0-4c64-944c-3a66e6b61e95-kube-api-access-9mk8r\") pod \"a190769b-2cd0-4c64-944c-3a66e6b61e95\" (UID: \"a190769b-2cd0-4c64-944c-3a66e6b61e95\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.981059 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5530b75c-ecf7-4774-8d4e-1acd7c14d729-operator-scripts\") pod \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.981216 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhwrp\" (UniqueName: \"kubernetes.io/projected/5530b75c-ecf7-4774-8d4e-1acd7c14d729-kube-api-access-bhwrp\") pod \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\" (UID: \"5530b75c-ecf7-4774-8d4e-1acd7c14d729\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.981321 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61c60661-872c-43f4-8830-7d96fd575bd8-operator-scripts\") pod \"61c60661-872c-43f4-8830-7d96fd575bd8\" (UID: \"61c60661-872c-43f4-8830-7d96fd575bd8\") " Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.982187 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a190769b-2cd0-4c64-944c-3a66e6b61e95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a190769b-2cd0-4c64-944c-3a66e6b61e95" (UID: "a190769b-2cd0-4c64-944c-3a66e6b61e95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.982460 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5530b75c-ecf7-4774-8d4e-1acd7c14d729-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5530b75c-ecf7-4774-8d4e-1acd7c14d729" (UID: "5530b75c-ecf7-4774-8d4e-1acd7c14d729"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.985640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c60661-872c-43f4-8830-7d96fd575bd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61c60661-872c-43f4-8830-7d96fd575bd8" (UID: "61c60661-872c-43f4-8830-7d96fd575bd8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.985845 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a190769b-2cd0-4c64-944c-3a66e6b61e95-kube-api-access-9mk8r" (OuterVolumeSpecName: "kube-api-access-9mk8r") pod "a190769b-2cd0-4c64-944c-3a66e6b61e95" (UID: "a190769b-2cd0-4c64-944c-3a66e6b61e95"). InnerVolumeSpecName "kube-api-access-9mk8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987798 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61c60661-872c-43f4-8830-7d96fd575bd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987832 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a190769b-2cd0-4c64-944c-3a66e6b61e95-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987847 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg4lv\" (UniqueName: \"kubernetes.io/projected/cfb137c7-3a42-4a4a-b463-f27806805277-kube-api-access-gg4lv\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987863 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mk8r\" (UniqueName: \"kubernetes.io/projected/a190769b-2cd0-4c64-944c-3a66e6b61e95-kube-api-access-9mk8r\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987875 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87g5s\" (UniqueName: \"kubernetes.io/projected/cb65fa33-2178-4d99-9279-ace1b50e4089-kube-api-access-87g5s\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987889 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49981ca5-ad04-4746-8900-a440bb82bc36-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987900 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7nrt\" (UniqueName: \"kubernetes.io/projected/49981ca5-ad04-4746-8900-a440bb82bc36-kube-api-access-l7nrt\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987909 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb65fa33-2178-4d99-9279-ace1b50e4089-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987921 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5530b75c-ecf7-4774-8d4e-1acd7c14d729-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.987931 4812 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfb137c7-3a42-4a4a-b463-f27806805277-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:51 crc kubenswrapper[4812]: I1124 19:35:51.993177 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c60661-872c-43f4-8830-7d96fd575bd8-kube-api-access-9jd6c" (OuterVolumeSpecName: "kube-api-access-9jd6c") pod "61c60661-872c-43f4-8830-7d96fd575bd8" (UID: "61c60661-872c-43f4-8830-7d96fd575bd8"). InnerVolumeSpecName "kube-api-access-9jd6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.000645 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5530b75c-ecf7-4774-8d4e-1acd7c14d729-kube-api-access-bhwrp" (OuterVolumeSpecName: "kube-api-access-bhwrp") pod "5530b75c-ecf7-4774-8d4e-1acd7c14d729" (UID: "5530b75c-ecf7-4774-8d4e-1acd7c14d729"). InnerVolumeSpecName "kube-api-access-bhwrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.089389 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-sb\") pod \"32ccb340-a030-436e-9484-c6021ee28bbe\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.089444 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-dns-svc\") pod \"32ccb340-a030-436e-9484-c6021ee28bbe\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.089490 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-nb\") pod \"32ccb340-a030-436e-9484-c6021ee28bbe\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.089514 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-config\") pod \"32ccb340-a030-436e-9484-c6021ee28bbe\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.089838 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2trk\" (UniqueName: \"kubernetes.io/projected/32ccb340-a030-436e-9484-c6021ee28bbe-kube-api-access-m2trk\") pod \"32ccb340-a030-436e-9484-c6021ee28bbe\" (UID: \"32ccb340-a030-436e-9484-c6021ee28bbe\") " Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.090219 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhwrp\" (UniqueName: \"kubernetes.io/projected/5530b75c-ecf7-4774-8d4e-1acd7c14d729-kube-api-access-bhwrp\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.090230 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jd6c\" (UniqueName: \"kubernetes.io/projected/61c60661-872c-43f4-8830-7d96fd575bd8-kube-api-access-9jd6c\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.107232 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/32ccb340-a030-436e-9484-c6021ee28bbe-kube-api-access-m2trk" (OuterVolumeSpecName: "kube-api-access-m2trk") pod "32ccb340-a030-436e-9484-c6021ee28bbe" (UID: "32ccb340-a030-436e-9484-c6021ee28bbe"). InnerVolumeSpecName "kube-api-access-m2trk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.139056 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32ccb340-a030-436e-9484-c6021ee28bbe" (UID: "32ccb340-a030-436e-9484-c6021ee28bbe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.147591 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32ccb340-a030-436e-9484-c6021ee28bbe" (UID: "32ccb340-a030-436e-9484-c6021ee28bbe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.157268 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-config" (OuterVolumeSpecName: "config") pod "32ccb340-a030-436e-9484-c6021ee28bbe" (UID: "32ccb340-a030-436e-9484-c6021ee28bbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.158980 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32ccb340-a030-436e-9484-c6021ee28bbe" (UID: "32ccb340-a030-436e-9484-c6021ee28bbe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.191718 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2trk\" (UniqueName: \"kubernetes.io/projected/32ccb340-a030-436e-9484-c6021ee28bbe-kube-api-access-m2trk\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.191939 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.192045 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.192134 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.192219 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32ccb340-a030-436e-9484-c6021ee28bbe-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.241244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-7l8kj" event={"ID":"cfb137c7-3a42-4a4a-b463-f27806805277","Type":"ContainerDied","Data":"865188c6a1baf29348f9e501d3997477e456699ce67604c635e4fa6d64a51402"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.241294 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="865188c6a1baf29348f9e501d3997477e456699ce67604c635e4fa6d64a51402" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.241303 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-7l8kj" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.245791 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5926-account-create-lsmsd" event={"ID":"cb65fa33-2178-4d99-9279-ace1b50e4089","Type":"ContainerDied","Data":"e03239b07c6820fc3b3a12a73c4e358a3f952c0ea81a9daef077f5867f1ef913"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.245821 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e03239b07c6820fc3b3a12a73c4e358a3f952c0ea81a9daef077f5867f1ef913" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.245874 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5926-account-create-lsmsd" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.259541 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2w8kz" event={"ID":"49981ca5-ad04-4746-8900-a440bb82bc36","Type":"ContainerDied","Data":"b840d3dac6e5dcf6f4a7473a32a80edb9a73d411d20ba2b7b1d31ea6fe9ecd8e"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.259586 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b840d3dac6e5dcf6f4a7473a32a80edb9a73d411d20ba2b7b1d31ea6fe9ecd8e" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.259568 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2w8kz" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.261946 4812 generic.go:334] "Generic (PLEG): container finished" podID="32ccb340-a030-436e-9484-c6021ee28bbe" containerID="2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1" exitCode=0 Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.262006 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.262017 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" event={"ID":"32ccb340-a030-436e-9484-c6021ee28bbe","Type":"ContainerDied","Data":"2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.262047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84ddf475bf-dvmhs" event={"ID":"32ccb340-a030-436e-9484-c6021ee28bbe","Type":"ContainerDied","Data":"68077d8c2ffcd8e9903e93459e9cefc862edac21fb0c673959ea188779299d2b"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.262072 4812 scope.go:117] "RemoveContainer" containerID="2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.269149 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-448d-account-create-z7k4c" event={"ID":"5530b75c-ecf7-4774-8d4e-1acd7c14d729","Type":"ContainerDied","Data":"d0fb753ba53ec7b152404e9c733589e12d03c6169ec7037258c522701936c8ca"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.269182 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0fb753ba53ec7b152404e9c733589e12d03c6169ec7037258c522701936c8ca" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.269242 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-448d-account-create-z7k4c" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.271818 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fac2-account-create-szxkh" event={"ID":"61c60661-872c-43f4-8830-7d96fd575bd8","Type":"ContainerDied","Data":"92642266d9c30108ff16e4ff2c02c026e62f39a75b0095b3c10073e118d0c417"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.271851 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92642266d9c30108ff16e4ff2c02c026e62f39a75b0095b3c10073e118d0c417" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.271903 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fac2-account-create-szxkh" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.276065 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" event={"ID":"79f12eac-94ed-4787-b2ce-92c02040e8f0","Type":"ContainerStarted","Data":"62a2d7b375580c8c67d3f9061814f2be06edb499e454fecdadbeb3de5fedee03"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.276213 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.278822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vjwqm" event={"ID":"a190769b-2cd0-4c64-944c-3a66e6b61e95","Type":"ContainerDied","Data":"83f346b775c4fcb118893ef49de091608babd66331942159f97cea6eb657cce4"} Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.278945 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f346b775c4fcb118893ef49de091608babd66331942159f97cea6eb657cce4" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.279000 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vjwqm" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.300983 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" podStartSLOduration=3.3009598110000002 podStartE2EDuration="3.300959811s" podCreationTimestamp="2025-11-24 19:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:35:52.299621713 +0000 UTC m=+1146.088574114" watchObservedRunningTime="2025-11-24 19:35:52.300959811 +0000 UTC m=+1146.089912192" Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.328938 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84ddf475bf-dvmhs"] Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.335301 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84ddf475bf-dvmhs"] Nov 24 19:35:52 crc kubenswrapper[4812]: I1124 19:35:52.975265 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ccb340-a030-436e-9484-c6021ee28bbe" path="/var/lib/kubelet/pods/32ccb340-a030-436e-9484-c6021ee28bbe/volumes" Nov 24 19:35:54 crc kubenswrapper[4812]: I1124 19:35:54.745536 4812 scope.go:117] "RemoveContainer" containerID="d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe" Nov 24 19:35:54 crc kubenswrapper[4812]: I1124 19:35:54.809744 4812 scope.go:117] "RemoveContainer" containerID="2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1" Nov 24 19:35:54 crc kubenswrapper[4812]: E1124 19:35:54.810443 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1\": container with ID starting with 2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1 not found: ID does not exist" containerID="2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1" Nov 24 19:35:54 crc kubenswrapper[4812]: I1124 19:35:54.810495 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1"} err="failed to get container status 
\"2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1\": rpc error: code = NotFound desc = could not find container \"2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1\": container with ID starting with 2301d34462c4fb86fecee9214e84f9fcc38f011588435acd2e18477e440e89b1 not found: ID does not exist" Nov 24 19:35:54 crc kubenswrapper[4812]: I1124 19:35:54.810673 4812 scope.go:117] "RemoveContainer" containerID="d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe" Nov 24 19:35:54 crc kubenswrapper[4812]: E1124 19:35:54.811059 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe\": container with ID starting with d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe not found: ID does not exist" containerID="d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe" Nov 24 19:35:54 crc kubenswrapper[4812]: I1124 19:35:54.811163 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe"} err="failed to get container status \"d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe\": rpc error: code = NotFound desc = could not find container \"d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe\": container with ID starting with d5fc7a271a2628e618e5cb7c750e95f0448fef4eb6ac7d1956390eefe8e62dfe not found: ID does not exist" Nov 24 19:35:55 crc kubenswrapper[4812]: I1124 19:35:55.313360 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62xwh" event={"ID":"cf8ae591-fb65-4513-b7eb-fd8f13c22761","Type":"ContainerStarted","Data":"759efb3419052af308a0f1cfd3e42dcd638d6d2359dad4e5663e9916b25ea007"} Nov 24 19:35:58 crc kubenswrapper[4812]: I1124 19:35:58.352039 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf8ae591-fb65-4513-b7eb-fd8f13c22761" containerID="759efb3419052af308a0f1cfd3e42dcd638d6d2359dad4e5663e9916b25ea007" exitCode=0 Nov 24 19:35:58 crc kubenswrapper[4812]: I1124 19:35:58.352203 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62xwh" event={"ID":"cf8ae591-fb65-4513-b7eb-fd8f13c22761","Type":"ContainerDied","Data":"759efb3419052af308a0f1cfd3e42dcd638d6d2359dad4e5663e9916b25ea007"} Nov 24 19:35:59 crc kubenswrapper[4812]: I1124 19:35:59.789134 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-62xwh" Nov 24 19:35:59 crc kubenswrapper[4812]: I1124 19:35:59.933159 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mftzj\" (UniqueName: \"kubernetes.io/projected/cf8ae591-fb65-4513-b7eb-fd8f13c22761-kube-api-access-mftzj\") pod \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " Nov 24 19:35:59 crc kubenswrapper[4812]: I1124 19:35:59.933304 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-config-data\") pod \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " Nov 24 19:35:59 crc kubenswrapper[4812]: I1124 19:35:59.933409 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-combined-ca-bundle\") pod \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\" (UID: \"cf8ae591-fb65-4513-b7eb-fd8f13c22761\") " Nov 24 19:35:59 crc kubenswrapper[4812]: I1124 19:35:59.940953 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8ae591-fb65-4513-b7eb-fd8f13c22761-kube-api-access-mftzj" (OuterVolumeSpecName: "kube-api-access-mftzj") pod "cf8ae591-fb65-4513-b7eb-fd8f13c22761" (UID: "cf8ae591-fb65-4513-b7eb-fd8f13c22761"). InnerVolumeSpecName "kube-api-access-mftzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:35:59 crc kubenswrapper[4812]: I1124 19:35:59.987731 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf8ae591-fb65-4513-b7eb-fd8f13c22761" (UID: "cf8ae591-fb65-4513-b7eb-fd8f13c22761"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.009935 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.010695 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-config-data" (OuterVolumeSpecName: "config-data") pod "cf8ae591-fb65-4513-b7eb-fd8f13c22761" (UID: "cf8ae591-fb65-4513-b7eb-fd8f13c22761"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.036664 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.036715 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf8ae591-fb65-4513-b7eb-fd8f13c22761-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.036736 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mftzj\" (UniqueName: \"kubernetes.io/projected/cf8ae591-fb65-4513-b7eb-fd8f13c22761-kube-api-access-mftzj\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.080422 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-jjph7"] Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.080715 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" podUID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerName="dnsmasq-dns" containerID="cri-o://5e92be46f30aebbddd1ccb1217b71c3cbf513d339a1383acb1f4bc96c35bc840" gracePeriod=10 Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.377005 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62xwh" event={"ID":"cf8ae591-fb65-4513-b7eb-fd8f13c22761","Type":"ContainerDied","Data":"f82a81cf191dfd970354b71de4e613f31a3b06eded86d439ef97f5f75e4fff7d"} Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.377430 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f82a81cf191dfd970354b71de4e613f31a3b06eded86d439ef97f5f75e4fff7d" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.377505 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62xwh" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.380144 4812 generic.go:334] "Generic (PLEG): container finished" podID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerID="5e92be46f30aebbddd1ccb1217b71c3cbf513d339a1383acb1f4bc96c35bc840" exitCode=0 Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.380174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" event={"ID":"8b3cbac7-75be-4160-84d2-1dcf750c19de","Type":"ContainerDied","Data":"5e92be46f30aebbddd1ccb1217b71c3cbf513d339a1383acb1f4bc96c35bc840"} Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.576106 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.659130 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-config\") pod \"8b3cbac7-75be-4160-84d2-1dcf750c19de\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.659227 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-dns-svc\") pod \"8b3cbac7-75be-4160-84d2-1dcf750c19de\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.659494 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-sb\") pod \"8b3cbac7-75be-4160-84d2-1dcf750c19de\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.659557 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-nb\") pod \"8b3cbac7-75be-4160-84d2-1dcf750c19de\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.659649 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-982mn\" (UniqueName: \"kubernetes.io/projected/8b3cbac7-75be-4160-84d2-1dcf750c19de-kube-api-access-982mn\") pod \"8b3cbac7-75be-4160-84d2-1dcf750c19de\" (UID: \"8b3cbac7-75be-4160-84d2-1dcf750c19de\") " Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.660445 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-qz9jm"] Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.660795 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a190769b-2cd0-4c64-944c-3a66e6b61e95" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.660945 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a190769b-2cd0-4c64-944c-3a66e6b61e95" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.661701 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c60661-872c-43f4-8830-7d96fd575bd8" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.661771 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c60661-872c-43f4-8830-7d96fd575bd8" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.661818 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb137c7-3a42-4a4a-b463-f27806805277" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.661865 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb137c7-3a42-4a4a-b463-f27806805277" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.661914 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerName="dnsmasq-dns" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.661956 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerName="dnsmasq-dns" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.662004 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerName="init" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.662056 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerName="init" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.662108 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49981ca5-ad04-4746-8900-a440bb82bc36" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.662152 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="49981ca5-ad04-4746-8900-a440bb82bc36" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.663968 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ccb340-a030-436e-9484-c6021ee28bbe" containerName="dnsmasq-dns" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.664031 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ccb340-a030-436e-9484-c6021ee28bbe" containerName="dnsmasq-dns" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.664083 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb65fa33-2178-4d99-9279-ace1b50e4089" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.664125 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb65fa33-2178-4d99-9279-ace1b50e4089" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.664174 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5530b75c-ecf7-4774-8d4e-1acd7c14d729" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.664218 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5530b75c-ecf7-4774-8d4e-1acd7c14d729" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.664285 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8ae591-fb65-4513-b7eb-fd8f13c22761" containerName="keystone-db-sync" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.664339 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8ae591-fb65-4513-b7eb-fd8f13c22761" containerName="keystone-db-sync" Nov 24 19:36:00 crc kubenswrapper[4812]: E1124 19:36:00.664397 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ccb340-a030-436e-9484-c6021ee28bbe" containerName="init" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.664581 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ccb340-a030-436e-9484-c6021ee28bbe" containerName="init" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.667622 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb137c7-3a42-4a4a-b463-f27806805277" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.667976 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b3cbac7-75be-4160-84d2-1dcf750c19de" containerName="dnsmasq-dns" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.670585 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb65fa33-2178-4d99-9279-ace1b50e4089" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.670886 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="61c60661-872c-43f4-8830-7d96fd575bd8" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.670947 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="49981ca5-ad04-4746-8900-a440bb82bc36" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.670996 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ccb340-a030-436e-9484-c6021ee28bbe" containerName="dnsmasq-dns" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.671051 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8ae591-fb65-4513-b7eb-fd8f13c22761" containerName="keystone-db-sync" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.671099 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5530b75c-ecf7-4774-8d4e-1acd7c14d729" containerName="mariadb-account-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.671151 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a190769b-2cd0-4c64-944c-3a66e6b61e95" containerName="mariadb-database-create" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.672433 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.677391 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b3cbac7-75be-4160-84d2-1dcf750c19de-kube-api-access-982mn" (OuterVolumeSpecName: "kube-api-access-982mn") pod "8b3cbac7-75be-4160-84d2-1dcf750c19de" (UID: "8b3cbac7-75be-4160-84d2-1dcf750c19de"). InnerVolumeSpecName "kube-api-access-982mn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.708870 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-qz9jm"] Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.744563 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9hwb5"] Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.752188 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b3cbac7-75be-4160-84d2-1dcf750c19de" (UID: "8b3cbac7-75be-4160-84d2-1dcf750c19de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.756546 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.761175 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc9c4\" (UniqueName: \"kubernetes.io/projected/fe0f12a7-b012-43b5-a79d-5745b3c02590-kube-api-access-tc9c4\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.761669 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-config" (OuterVolumeSpecName: "config") pod "8b3cbac7-75be-4160-84d2-1dcf750c19de" (UID: "8b3cbac7-75be-4160-84d2-1dcf750c19de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.761680 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.761815 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.762022 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-config\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.762169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.763173 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b5j9w" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.769123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.772118 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.772202 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-982mn\" (UniqueName: \"kubernetes.io/projected/8b3cbac7-75be-4160-84d2-1dcf750c19de-kube-api-access-982mn\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.772272 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.769893 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9hwb5"] Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.769770 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.769928 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 19:36:00 
crc kubenswrapper[4812]: I1124 19:36:00.769975 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.776518 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.800755 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b3cbac7-75be-4160-84d2-1dcf750c19de" (UID: "8b3cbac7-75be-4160-84d2-1dcf750c19de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.857751 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b3cbac7-75be-4160-84d2-1dcf750c19de" (UID: "8b3cbac7-75be-4160-84d2-1dcf750c19de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.874180 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-config\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875072 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-config\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875138 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875177 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-scripts\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875200 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxx8\" (UniqueName: \"kubernetes.io/projected/77e04404-63c3-4c13-8d11-5ec3953956ea-kube-api-access-tgxx8\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875216 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-config-data\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875231 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-combined-ca-bundle\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875254 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-credential-keys\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875270 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875314 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc9c4\" (UniqueName: \"kubernetes.io/projected/fe0f12a7-b012-43b5-a79d-5745b3c02590-kube-api-access-tc9c4\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875344 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875362 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875396 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-fernet-keys\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875436 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875446 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b3cbac7-75be-4160-84d2-1dcf750c19de-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.875934 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-sb\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " 
pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.876474 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-svc\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.876715 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-swift-storage-0\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.876960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-nb\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.924040 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-wv8xf"] Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.962104 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.965233 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-shbmd" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.965475 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.965727 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc9c4\" (UniqueName: \"kubernetes.io/projected/fe0f12a7-b012-43b5-a79d-5745b3c02590-kube-api-access-tc9c4\") pod \"dnsmasq-dns-7dbf8bff67-qz9jm\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.967904 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.986920 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-scripts\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.987005 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxx8\" (UniqueName: \"kubernetes.io/projected/77e04404-63c3-4c13-8d11-5ec3953956ea-kube-api-access-tgxx8\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.987036 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-config-data\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 
19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.987085 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-combined-ca-bundle\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.987154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-credential-keys\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:00 crc kubenswrapper[4812]: I1124 19:36:00.987443 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-fernet-keys\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.009534 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-fernet-keys\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.009945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-credential-keys\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.010581 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-config-data\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.011205 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-scripts\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.026272 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-combined-ca-bundle\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.040346 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wv8xf"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.040390 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nl6jz"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.041870 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.044153 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxx8\" (UniqueName: \"kubernetes.io/projected/77e04404-63c3-4c13-8d11-5ec3953956ea-kube-api-access-tgxx8\") pod \"keystone-bootstrap-9hwb5\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.051243 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.051407 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.051433 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hnc55" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.054741 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nl6jz"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.093180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-combined-ca-bundle\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.093256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-config\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.093398 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwxxb\" (UniqueName: \"kubernetes.io/projected/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-kube-api-access-gwxxb\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.114947 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-pq5zx"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.116230 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.120128 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-8xlhm" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.120298 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.146376 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j8glp"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.147312 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.149082 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.149269 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.149459 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-w955g" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.165616 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-qz9jm"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.166239 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.186946 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pq5zx"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.197107 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j8glp"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199329 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-combined-ca-bundle\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199374 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-config-data\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199397 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-combined-ca-bundle\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-combined-ca-bundle\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199451 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-db-sync-config-data\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-config\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" 
Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199505 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gpjd\" (UniqueName: \"kubernetes.io/projected/7fd280c7-5e1d-4106-af02-86eee8c72f62-kube-api-access-7gpjd\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199527 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a5a2d87-4746-4254-9f63-10fe73a4001f-etc-machine-id\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199544 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-scripts\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199579 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-db-sync-config-data\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199596 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6cc9\" (UniqueName: \"kubernetes.io/projected/5a5a2d87-4746-4254-9f63-10fe73a4001f-kube-api-access-s6cc9\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.199617 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwxxb\" (UniqueName: \"kubernetes.io/projected/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-kube-api-access-gwxxb\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.201057 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.202887 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-combined-ca-bundle\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.209973 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-config\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.216859 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwxxb\" (UniqueName: \"kubernetes.io/projected/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-kube-api-access-gwxxb\") pod \"neutron-db-sync-wv8xf\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.231423 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-gfqnn"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.232909 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.240877 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.245185 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.247372 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.247465 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.249855 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-gfqnn"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.258540 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.300992 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-db-sync-config-data\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gpjd\" (UniqueName: \"kubernetes.io/projected/7fd280c7-5e1d-4106-af02-86eee8c72f62-kube-api-access-7gpjd\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-combined-ca-bundle\") pod \"placement-db-sync-j8glp\" 
(UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301095 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a5a2d87-4746-4254-9f63-10fe73a4001f-etc-machine-id\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301113 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-scripts\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301132 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcwh\" (UniqueName: \"kubernetes.io/projected/2e22c06f-3e52-4943-9a00-b964d62d8cab-kube-api-access-wmcwh\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301156 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-config-data\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301175 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22c06f-3e52-4943-9a00-b964d62d8cab-logs\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301202 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-db-sync-config-data\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301219 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6cc9\" (UniqueName: \"kubernetes.io/projected/5a5a2d87-4746-4254-9f63-10fe73a4001f-kube-api-access-s6cc9\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301240 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-scripts\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-combined-ca-bundle\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc 
kubenswrapper[4812]: I1124 19:36:01.301308 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-config-data\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.301328 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-combined-ca-bundle\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.302431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a5a2d87-4746-4254-9f63-10fe73a4001f-etc-machine-id\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.308301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-combined-ca-bundle\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.308634 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-scripts\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.310223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-db-sync-config-data\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.310308 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-config-data\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.310783 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-db-sync-config-data\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.310814 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-combined-ca-bundle\") pod \"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.319506 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gpjd\" (UniqueName: \"kubernetes.io/projected/7fd280c7-5e1d-4106-af02-86eee8c72f62-kube-api-access-7gpjd\") pod 
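
The burst above is the kubelet volume reconciler working through the db-sync pods: each volume is first verified as attached (reconciler_common.go:245), then a mount is started (reconciler_common.go:218), and finally operation_generator.go:637 reports "MountVolume.SetUp succeeded". A minimal parsing sketch of that pairing follows; it assumes the journal has been captured to a plain-text file (kubelet.log is a hypothetical name), and the regexes reflect only the fields visible in these entries.

```python
import re
from collections import defaultdict

# Pair "MountVolume started" entries with their "MountVolume.SetUp succeeded"
# counterparts, keyed by pod, and report which volumes are still pending.
STARTED = re.compile(r'MountVolume started for volume \\"([^"\\]+)\\".*? pod="([^"]+)"')
DONE = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\".*? pod="([^"]+)"')

def pending_mounts(lines):
    state = defaultdict(set)            # pod -> volumes awaiting SetUp
    for line in lines:
        if m := STARTED.search(line):
            state[m.group(2)].add(m.group(1))
        elif m := DONE.search(line):
            state[m.group(2)].discard(m.group(1))
    return {pod: vols for pod, vols in state.items() if vols}

with open("kubelet.log") as f:          # hypothetical capture of this journal
    for pod, vols in pending_mounts(f).items():
        print(pod, "still waiting on:", ", ".join(sorted(vols)))
```
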
\"barbican-db-sync-pq5zx\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.319576 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6cc9\" (UniqueName: \"kubernetes.io/projected/5a5a2d87-4746-4254-9f63-10fe73a4001f-kube-api-access-s6cc9\") pod \"cinder-db-sync-nl6jz\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.324635 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.372894 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.392123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" event={"ID":"8b3cbac7-75be-4160-84d2-1dcf750c19de","Type":"ContainerDied","Data":"bdf4f34e772286332d0daf1d9542e3368bf9d7413c133d4cb4d8ea98d846acd8"} Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.392173 4812 scope.go:117] "RemoveContainer" containerID="5e92be46f30aebbddd1ccb1217b71c3cbf513d339a1383acb1f4bc96c35bc840" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.392357 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-jjph7" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408601 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-combined-ca-bundle\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408658 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-config\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmcwh\" (UniqueName: \"kubernetes.io/projected/2e22c06f-3e52-4943-9a00-b964d62d8cab-kube-api-access-wmcwh\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408702 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxwq\" (UniqueName: \"kubernetes.io/projected/9788b094-1c08-4c10-be9f-a22f3087c814-kube-api-access-6jxwq\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408732 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-config-data\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408752 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22c06f-3e52-4943-9a00-b964d62d8cab-logs\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-run-httpd\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408798 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408816 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-scripts\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408899 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-log-httpd\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408924 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408946 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-config-data\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408961 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.408990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-scripts\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.409010 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtwkz\" (UniqueName: \"kubernetes.io/projected/169bdfbe-91a0-476c-a315-e9ea82f10ca5-kube-api-access-wtwkz\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.409028 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.411042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22c06f-3e52-4943-9a00-b964d62d8cab-logs\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.413492 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-config-data\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.421044 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-combined-ca-bundle\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.425534 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-scripts\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.428769 4812 scope.go:117] "RemoveContainer" containerID="691fe36d3615693669c1adbf5fba5712842f50a099bce01b19207fa150039f3f" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.428912 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-jjph7"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.438942 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.441144 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmcwh\" (UniqueName: \"kubernetes.io/projected/2e22c06f-3e52-4943-9a00-b964d62d8cab-kube-api-access-wmcwh\") pod \"placement-db-sync-j8glp\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.446387 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-jjph7"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.471014 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511020 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-config-data\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-scripts\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtwkz\" (UniqueName: \"kubernetes.io/projected/169bdfbe-91a0-476c-a315-e9ea82f10ca5-kube-api-access-wtwkz\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511266 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-config\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511288 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jxwq\" (UniqueName: \"kubernetes.io/projected/9788b094-1c08-4c10-be9f-a22f3087c814-kube-api-access-6jxwq\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511309 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-run-httpd\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511362 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511422 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-log-httpd\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.511447 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.512154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-svc\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.515461 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-sb\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.515971 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.516055 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-config\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc 
kubenswrapper[4812]: I1124 19:36:01.516301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-log-httpd\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.517437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-scripts\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.518266 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-run-httpd\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.518397 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-swift-storage-0\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.518638 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-nb\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.529735 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-config-data\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.530385 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.530597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jxwq\" (UniqueName: \"kubernetes.io/projected/9788b094-1c08-4c10-be9f-a22f3087c814-kube-api-access-6jxwq\") pod \"dnsmasq-dns-76c58b6d97-gfqnn\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.531438 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtwkz\" (UniqueName: \"kubernetes.io/projected/169bdfbe-91a0-476c-a315-e9ea82f10ca5-kube-api-access-wtwkz\") pod \"ceilometer-0\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.555243 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.569941 4812 util.go:30] "No sandbox for pod can be found. 
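
Each util.go:30 "No sandbox for pod can be found" line (or the util.go:48 "No ready sandbox" variant) marks the moment the kubelet commits to creating a fresh pod sandbox. A small sketch that turns those decisions into a timeline, again against a hypothetical kubelet.log capture:

```python
import re

# Extract (timestamp, pod, reason) for every sandbox-creation decision.
SANDBOX = re.compile(
    r'I(\d{4} \d{2}:\d{2}:\d{2}\.\d+) \d+ util\.go:\d+\] '
    r'"No (ready )?sandbox for pod can be found\. Need to start a new one" '
    r'pod="([^"]+)"')

with open("kubelet.log") as f:          # hypothetical capture of this journal
    for line in f:
        for ts, ready, pod in SANDBOX.findall(line):
            print(ts, pod, "no ready sandbox" if ready else "no sandbox")
```
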
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.667993 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-qz9jm"] Nov 24 19:36:01 crc kubenswrapper[4812]: W1124 19:36:01.678989 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe0f12a7_b012_43b5_a79d_5745b3c02590.slice/crio-3fcbfce2409d2ec7d8f3574038485f567680e05fb40c71f257785d5af9fed7e7 WatchSource:0}: Error finding container 3fcbfce2409d2ec7d8f3574038485f567680e05fb40c71f257785d5af9fed7e7: Status 404 returned error can't find the container with id 3fcbfce2409d2ec7d8f3574038485f567680e05fb40c71f257785d5af9fed7e7 Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.751036 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9hwb5"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.824389 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-wv8xf"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.833030 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.834384 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.836033 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.836442 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.836823 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z5qr5" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.837894 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.846020 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925610 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925721 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-config-data\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925822 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925847 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-scripts\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925888 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925914 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-logs\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.925974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f916654c-9eac-44ed-a9f7-c013b5a99345-kube-api-access-jldjj\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.929475 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nl6jz"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.941024 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.942392 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.945078 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.945211 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 19:36:01 crc kubenswrapper[4812]: I1124 19:36:01.950455 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029260 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029298 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029350 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029381 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-scripts\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029422 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029442 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029462 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-logs\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029500 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5d5\" (UniqueName: \"kubernetes.io/projected/009058ea-bc3f-4d45-af05-f9ebde87dad6-kube-api-access-ch5d5\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f916654c-9eac-44ed-a9f7-c013b5a99345-kube-api-access-jldjj\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029568 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029586 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029605 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029627 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-logs\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.029655 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-config-data\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.030487 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.031567 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-logs\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.032202 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.037123 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-scripts\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.037221 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.038360 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-config-data\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.040776 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.074737 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f916654c-9eac-44ed-a9f7-c013b5a99345-kube-api-access-jldjj\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.095167 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.100308 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-pq5zx"] Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.107062 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j8glp"] Nov 24 
19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.112458 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:02 crc kubenswrapper[4812]: W1124 19:36:02.122380 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod169bdfbe_91a0_476c_a315_e9ea82f10ca5.slice/crio-84dd7da0e7355a6c3fb1d9d1c6cbafb4f561ff71a4654c0c2abc04d1dff1547a WatchSource:0}: Error finding container 84dd7da0e7355a6c3fb1d9d1c6cbafb4f561ff71a4654c0c2abc04d1dff1547a: Status 404 returned error can't find the container with id 84dd7da0e7355a6c3fb1d9d1c6cbafb4f561ff71a4654c0c2abc04d1dff1547a Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.132565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.132600 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.132652 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.133899 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.135078 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.135196 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.135395 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch5d5\" (UniqueName: \"kubernetes.io/projected/009058ea-bc3f-4d45-af05-f9ebde87dad6-kube-api-access-ch5d5\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.135601 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
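
The W-level manager.go:1169 entries are cAdvisor failing to resolve a just-created cgroup to a container (status 404). In this capture they look transient: the same container IDs (84dd7da0… above, a5a07d9b… further down) reappear moments later in PLEG ContainerStarted events. A sketch that cross-checks the warnings against later starts, under the same hypothetical kubelet.log assumption:

```python
import re

# Match each cAdvisor 404 watch-event warning to a later ContainerStarted.
WATCH_404 = re.compile(r"W\d{4} (\S+) .*crio-([0-9a-f]{64}) WatchSource")
PLEG_STARTED = re.compile(r'"ContainerStarted","Data":"([0-9a-f]{64})"')

warned, started = {}, set()
with open("kubelet.log") as f:          # hypothetical capture of this journal
    for line in f:
        if m := WATCH_404.search(line):
            warned[m.group(2)] = m.group(1)
        started.update(PLEG_STARTED.findall(line))

for cid, ts in warned.items():
    print(ts, cid[:12], "seen running later" if cid in started else "never started")
```
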
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.135706 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-logs\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.135933 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.136538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-logs\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.146407 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.146611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.146855 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.152219 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.155034 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch5d5\" (UniqueName: \"kubernetes.io/projected/009058ea-bc3f-4d45-af05-f9ebde87dad6-kube-api-access-ch5d5\") pod \"glance-default-internal-api-0\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.169955 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.222687 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-gfqnn"] Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.363779 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.400742 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.444392 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerStarted","Data":"84dd7da0e7355a6c3fb1d9d1c6cbafb4f561ff71a4654c0c2abc04d1dff1547a"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.448248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9hwb5" event={"ID":"77e04404-63c3-4c13-8d11-5ec3953956ea","Type":"ContainerStarted","Data":"bff8b6d9f9e59a43bca8cd6f4630a2ffe5d31fd7e1ec2ffc6c7cc3e9a1a6f6ac"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.448575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9hwb5" event={"ID":"77e04404-63c3-4c13-8d11-5ec3953956ea","Type":"ContainerStarted","Data":"a058a050288786b771d00fb35686230c882688b180215d9532fd01df2e80c212"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.450169 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nl6jz" event={"ID":"5a5a2d87-4746-4254-9f63-10fe73a4001f","Type":"ContainerStarted","Data":"91d1d1002d53d0bbc84cd1e8faecccc6beb4f0059a4cefa6dffa9f2f75890729"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.451457 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pq5zx" event={"ID":"7fd280c7-5e1d-4106-af02-86eee8c72f62","Type":"ContainerStarted","Data":"1fa5eefa229f841feb68b4ed0e15bbbac65a0f7465ab5b7c6b4abb5225684926"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.452522 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wv8xf" event={"ID":"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f","Type":"ContainerStarted","Data":"6ecefa45e298955bae6aa3a3705807ec3dc02991ab6d484fb90e5a7431b808c5"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.452546 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wv8xf" event={"ID":"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f","Type":"ContainerStarted","Data":"53e3b280248ef935044052e12244eedb9d0055d38cfb3934b582fe8a60d70ad3"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.455326 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" event={"ID":"9788b094-1c08-4c10-be9f-a22f3087c814","Type":"ContainerStarted","Data":"bd7e31fbce8fe57d6c064aaf6d13adb838b7f4bc793229d83d8d4769e8cfb7fe"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.456543 4812 generic.go:334] "Generic (PLEG): container finished" podID="fe0f12a7-b012-43b5-a79d-5745b3c02590" containerID="bbbb686c5fbad5671ec1a70d18c63c2428e1a97c3d7de8e644013fab5f1f64ce" exitCode=0 Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.456595 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" 
event={"ID":"fe0f12a7-b012-43b5-a79d-5745b3c02590","Type":"ContainerDied","Data":"bbbb686c5fbad5671ec1a70d18c63c2428e1a97c3d7de8e644013fab5f1f64ce"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.456612 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" event={"ID":"fe0f12a7-b012-43b5-a79d-5745b3c02590","Type":"ContainerStarted","Data":"3fcbfce2409d2ec7d8f3574038485f567680e05fb40c71f257785d5af9fed7e7"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.459388 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j8glp" event={"ID":"2e22c06f-3e52-4943-9a00-b964d62d8cab","Type":"ContainerStarted","Data":"6a822e219ca30b04ee0e27d81e2eabb90937aad9d56135fedda1dd5f41c11b03"} Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.494304 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9hwb5" podStartSLOduration=2.494287995 podStartE2EDuration="2.494287995s" podCreationTimestamp="2025-11-24 19:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:02.471657083 +0000 UTC m=+1156.260609454" watchObservedRunningTime="2025-11-24 19:36:02.494287995 +0000 UTC m=+1156.283240366" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.516800 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-wv8xf" podStartSLOduration=2.516775743 podStartE2EDuration="2.516775743s" podCreationTimestamp="2025-11-24 19:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:02.499951926 +0000 UTC m=+1156.288904307" watchObservedRunningTime="2025-11-24 19:36:02.516775743 +0000 UTC m=+1156.305728114" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.952207 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.985188 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b3cbac7-75be-4160-84d2-1dcf750c19de" path="/var/lib/kubelet/pods/8b3cbac7-75be-4160-84d2-1dcf750c19de/volumes" Nov 24 19:36:02 crc kubenswrapper[4812]: I1124 19:36:02.989561 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.049228 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc9c4\" (UniqueName: \"kubernetes.io/projected/fe0f12a7-b012-43b5-a79d-5745b3c02590-kube-api-access-tc9c4\") pod \"fe0f12a7-b012-43b5-a79d-5745b3c02590\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.049375 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-nb\") pod \"fe0f12a7-b012-43b5-a79d-5745b3c02590\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.049414 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-svc\") pod \"fe0f12a7-b012-43b5-a79d-5745b3c02590\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.049499 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-swift-storage-0\") pod \"fe0f12a7-b012-43b5-a79d-5745b3c02590\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.049579 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-sb\") pod \"fe0f12a7-b012-43b5-a79d-5745b3c02590\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.049665 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-config\") pod \"fe0f12a7-b012-43b5-a79d-5745b3c02590\" (UID: \"fe0f12a7-b012-43b5-a79d-5745b3c02590\") " Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.082722 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0f12a7-b012-43b5-a79d-5745b3c02590-kube-api-access-tc9c4" (OuterVolumeSpecName: "kube-api-access-tc9c4") pod "fe0f12a7-b012-43b5-a79d-5745b3c02590" (UID: "fe0f12a7-b012-43b5-a79d-5745b3c02590"). InnerVolumeSpecName "kube-api-access-tc9c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.087104 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe0f12a7-b012-43b5-a79d-5745b3c02590" (UID: "fe0f12a7-b012-43b5-a79d-5745b3c02590"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.092157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe0f12a7-b012-43b5-a79d-5745b3c02590" (UID: "fe0f12a7-b012-43b5-a79d-5745b3c02590"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.092578 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fe0f12a7-b012-43b5-a79d-5745b3c02590" (UID: "fe0f12a7-b012-43b5-a79d-5745b3c02590"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.108973 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:03 crc kubenswrapper[4812]: W1124 19:36:03.118498 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod009058ea_bc3f_4d45_af05_f9ebde87dad6.slice/crio-a5a07d9b4266f9dd2fff1a3d723ade60258a0d782b1637b78f7000662811ca14 WatchSource:0}: Error finding container a5a07d9b4266f9dd2fff1a3d723ade60258a0d782b1637b78f7000662811ca14: Status 404 returned error can't find the container with id a5a07d9b4266f9dd2fff1a3d723ade60258a0d782b1637b78f7000662811ca14 Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.120879 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe0f12a7-b012-43b5-a79d-5745b3c02590" (UID: "fe0f12a7-b012-43b5-a79d-5745b3c02590"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.121135 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-config" (OuterVolumeSpecName: "config") pod "fe0f12a7-b012-43b5-a79d-5745b3c02590" (UID: "fe0f12a7-b012-43b5-a79d-5745b3c02590"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.151582 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.151613 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.151624 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.151633 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.151644 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe0f12a7-b012-43b5-a79d-5745b3c02590-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.151652 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc9c4\" (UniqueName: \"kubernetes.io/projected/fe0f12a7-b012-43b5-a79d-5745b3c02590-kube-api-access-tc9c4\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.485898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"009058ea-bc3f-4d45-af05-f9ebde87dad6","Type":"ContainerStarted","Data":"a5a07d9b4266f9dd2fff1a3d723ade60258a0d782b1637b78f7000662811ca14"} Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.490216 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" event={"ID":"fe0f12a7-b012-43b5-a79d-5745b3c02590","Type":"ContainerDied","Data":"3fcbfce2409d2ec7d8f3574038485f567680e05fb40c71f257785d5af9fed7e7"} Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.490260 4812 scope.go:117] "RemoveContainer" containerID="bbbb686c5fbad5671ec1a70d18c63c2428e1a97c3d7de8e644013fab5f1f64ce" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.490410 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dbf8bff67-qz9jm" Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.495420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f916654c-9eac-44ed-a9f7-c013b5a99345","Type":"ContainerStarted","Data":"a46a1b56bb38011cb860fc589897bac96f04484f5f1c55de15fc745f10e07972"} Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.498267 4812 generic.go:334] "Generic (PLEG): container finished" podID="9788b094-1c08-4c10-be9f-a22f3087c814" containerID="ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151" exitCode=0 Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.498349 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" event={"ID":"9788b094-1c08-4c10-be9f-a22f3087c814","Type":"ContainerDied","Data":"ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151"} Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.589752 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-qz9jm"] Nov 24 19:36:03 crc kubenswrapper[4812]: I1124 19:36:03.610496 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dbf8bff67-qz9jm"] Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.109543 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.204712 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.257841 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.518971 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f916654c-9eac-44ed-a9f7-c013b5a99345","Type":"ContainerStarted","Data":"94459c598e3ea2371bb7e784dbc1ba529eee3868a4c3c05903f538edca9152da"} Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.522860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" event={"ID":"9788b094-1c08-4c10-be9f-a22f3087c814","Type":"ContainerStarted","Data":"82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f"} Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.523094 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.527693 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"009058ea-bc3f-4d45-af05-f9ebde87dad6","Type":"ContainerStarted","Data":"d34dac02fe66dec06d00c965faad7817346e3f910c23dc6d95f71582d20268ce"} Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.559626 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" podStartSLOduration=3.559609484 podStartE2EDuration="3.559609484s" podCreationTimestamp="2025-11-24 19:36:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:04.545691989 +0000 UTC m=+1158.334644360" watchObservedRunningTime="2025-11-24 19:36:04.559609484 +0000 UTC m=+1158.348561855" Nov 24 19:36:04 crc kubenswrapper[4812]: I1124 19:36:04.975088 4812 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="fe0f12a7-b012-43b5-a79d-5745b3c02590" path="/var/lib/kubelet/pods/fe0f12a7-b012-43b5-a79d-5745b3c02590/volumes" Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.548828 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"009058ea-bc3f-4d45-af05-f9ebde87dad6","Type":"ContainerStarted","Data":"01b85ca6a5036b88a561491e73fad7b65775ce5e7670c0307ef2cc7741dd7576"} Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.548931 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-log" containerID="cri-o://d34dac02fe66dec06d00c965faad7817346e3f910c23dc6d95f71582d20268ce" gracePeriod=30 Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.548989 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-httpd" containerID="cri-o://01b85ca6a5036b88a561491e73fad7b65775ce5e7670c0307ef2cc7741dd7576" gracePeriod=30 Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.551379 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f916654c-9eac-44ed-a9f7-c013b5a99345","Type":"ContainerStarted","Data":"ffb81762bf19edd6eb17dad97155e64d97bb9845adcc73cf4714552c386780aa"} Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.551609 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-log" containerID="cri-o://94459c598e3ea2371bb7e784dbc1ba529eee3868a4c3c05903f538edca9152da" gracePeriod=30 Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.551714 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-httpd" containerID="cri-o://ffb81762bf19edd6eb17dad97155e64d97bb9845adcc73cf4714552c386780aa" gracePeriod=30 Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.571152 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.571132984 podStartE2EDuration="5.571132984s" podCreationTimestamp="2025-11-24 19:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:05.570412934 +0000 UTC m=+1159.359365305" watchObservedRunningTime="2025-11-24 19:36:05.571132984 +0000 UTC m=+1159.360085355" Nov 24 19:36:05 crc kubenswrapper[4812]: I1124 19:36:05.598922 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.598887332 podStartE2EDuration="5.598887332s" podCreationTimestamp="2025-11-24 19:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:05.592443169 +0000 UTC m=+1159.381395560" watchObservedRunningTime="2025-11-24 19:36:05.598887332 +0000 UTC m=+1159.387839703" Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.564375 4812 generic.go:334] "Generic (PLEG): container finished" podID="009058ea-bc3f-4d45-af05-f9ebde87dad6" 
containerID="01b85ca6a5036b88a561491e73fad7b65775ce5e7670c0307ef2cc7741dd7576" exitCode=0 Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.564410 4812 generic.go:334] "Generic (PLEG): container finished" podID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerID="d34dac02fe66dec06d00c965faad7817346e3f910c23dc6d95f71582d20268ce" exitCode=143 Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.564447 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"009058ea-bc3f-4d45-af05-f9ebde87dad6","Type":"ContainerDied","Data":"01b85ca6a5036b88a561491e73fad7b65775ce5e7670c0307ef2cc7741dd7576"} Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.564473 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"009058ea-bc3f-4d45-af05-f9ebde87dad6","Type":"ContainerDied","Data":"d34dac02fe66dec06d00c965faad7817346e3f910c23dc6d95f71582d20268ce"} Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.567679 4812 generic.go:334] "Generic (PLEG): container finished" podID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerID="ffb81762bf19edd6eb17dad97155e64d97bb9845adcc73cf4714552c386780aa" exitCode=0 Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.567704 4812 generic.go:334] "Generic (PLEG): container finished" podID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerID="94459c598e3ea2371bb7e784dbc1ba529eee3868a4c3c05903f538edca9152da" exitCode=143 Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.567750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f916654c-9eac-44ed-a9f7-c013b5a99345","Type":"ContainerDied","Data":"ffb81762bf19edd6eb17dad97155e64d97bb9845adcc73cf4714552c386780aa"} Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.567786 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f916654c-9eac-44ed-a9f7-c013b5a99345","Type":"ContainerDied","Data":"94459c598e3ea2371bb7e784dbc1ba529eee3868a4c3c05903f538edca9152da"} Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.570988 4812 generic.go:334] "Generic (PLEG): container finished" podID="77e04404-63c3-4c13-8d11-5ec3953956ea" containerID="bff8b6d9f9e59a43bca8cd6f4630a2ffe5d31fd7e1ec2ffc6c7cc3e9a1a6f6ac" exitCode=0 Nov 24 19:36:06 crc kubenswrapper[4812]: I1124 19:36:06.571070 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9hwb5" event={"ID":"77e04404-63c3-4c13-8d11-5ec3953956ea","Type":"ContainerDied","Data":"bff8b6d9f9e59a43bca8cd6f4630a2ffe5d31fd7e1ec2ffc6c7cc3e9a1a6f6ac"} Nov 24 19:36:11 crc kubenswrapper[4812]: I1124 19:36:11.556950 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:11 crc kubenswrapper[4812]: I1124 19:36:11.627092 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-rfzx9"] Nov 24 19:36:11 crc kubenswrapper[4812]: I1124 19:36:11.627393 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="dnsmasq-dns" containerID="cri-o://62a2d7b375580c8c67d3f9061814f2be06edb499e454fecdadbeb3de5fedee03" gracePeriod=10 Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.480277 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.638890 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgxx8\" (UniqueName: \"kubernetes.io/projected/77e04404-63c3-4c13-8d11-5ec3953956ea-kube-api-access-tgxx8\") pod \"77e04404-63c3-4c13-8d11-5ec3953956ea\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.639366 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-combined-ca-bundle\") pod \"77e04404-63c3-4c13-8d11-5ec3953956ea\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.639414 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-config-data\") pod \"77e04404-63c3-4c13-8d11-5ec3953956ea\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.639499 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-fernet-keys\") pod \"77e04404-63c3-4c13-8d11-5ec3953956ea\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.639552 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-scripts\") pod \"77e04404-63c3-4c13-8d11-5ec3953956ea\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.639586 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-credential-keys\") pod \"77e04404-63c3-4c13-8d11-5ec3953956ea\" (UID: \"77e04404-63c3-4c13-8d11-5ec3953956ea\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.645460 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77e04404-63c3-4c13-8d11-5ec3953956ea-kube-api-access-tgxx8" (OuterVolumeSpecName: "kube-api-access-tgxx8") pod "77e04404-63c3-4c13-8d11-5ec3953956ea" (UID: "77e04404-63c3-4c13-8d11-5ec3953956ea"). InnerVolumeSpecName "kube-api-access-tgxx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.654932 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-scripts" (OuterVolumeSpecName: "scripts") pod "77e04404-63c3-4c13-8d11-5ec3953956ea" (UID: "77e04404-63c3-4c13-8d11-5ec3953956ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.655719 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "77e04404-63c3-4c13-8d11-5ec3953956ea" (UID: "77e04404-63c3-4c13-8d11-5ec3953956ea"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.668157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "77e04404-63c3-4c13-8d11-5ec3953956ea" (UID: "77e04404-63c3-4c13-8d11-5ec3953956ea"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.670175 4812 generic.go:334] "Generic (PLEG): container finished" podID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerID="62a2d7b375580c8c67d3f9061814f2be06edb499e454fecdadbeb3de5fedee03" exitCode=0 Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.670232 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" event={"ID":"79f12eac-94ed-4787-b2ce-92c02040e8f0","Type":"ContainerDied","Data":"62a2d7b375580c8c67d3f9061814f2be06edb499e454fecdadbeb3de5fedee03"} Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.673738 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9hwb5" event={"ID":"77e04404-63c3-4c13-8d11-5ec3953956ea","Type":"ContainerDied","Data":"a058a050288786b771d00fb35686230c882688b180215d9532fd01df2e80c212"} Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.673773 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a058a050288786b771d00fb35686230c882688b180215d9532fd01df2e80c212" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.673835 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9hwb5" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.681434 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77e04404-63c3-4c13-8d11-5ec3953956ea" (UID: "77e04404-63c3-4c13-8d11-5ec3953956ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.687616 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-config-data" (OuterVolumeSpecName: "config-data") pod "77e04404-63c3-4c13-8d11-5ec3953956ea" (UID: "77e04404-63c3-4c13-8d11-5ec3953956ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.741788 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.741825 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.741838 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.741854 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgxx8\" (UniqueName: \"kubernetes.io/projected/77e04404-63c3-4c13-8d11-5ec3953956ea-kube-api-access-tgxx8\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.741867 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.741878 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77e04404-63c3-4c13-8d11-5ec3953956ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.943292 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.945449 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-scripts\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.945502 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.948285 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-scripts" (OuterVolumeSpecName: "scripts") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:12 crc kubenswrapper[4812]: I1124 19:36:12.952696 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.046714 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-combined-ca-bundle\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.046856 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f916654c-9eac-44ed-a9f7-c013b5a99345-kube-api-access-jldjj\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.046894 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-httpd-run\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.046935 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-config-data\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.046985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-logs\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.047041 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-public-tls-certs\") pod \"f916654c-9eac-44ed-a9f7-c013b5a99345\" (UID: \"f916654c-9eac-44ed-a9f7-c013b5a99345\") " Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.047434 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.047464 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.048061 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.049027 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-logs" (OuterVolumeSpecName: "logs") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.058795 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f916654c-9eac-44ed-a9f7-c013b5a99345-kube-api-access-jldjj" (OuterVolumeSpecName: "kube-api-access-jldjj") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "kube-api-access-jldjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.085545 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.092376 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.111038 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.117512 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-config-data" (OuterVolumeSpecName: "config-data") pod "f916654c-9eac-44ed-a9f7-c013b5a99345" (UID: "f916654c-9eac-44ed-a9f7-c013b5a99345"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.148913 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.148946 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.148956 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.148983 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f916654c-9eac-44ed-a9f7-c013b5a99345-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.148994 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.149005 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jldjj\" (UniqueName: \"kubernetes.io/projected/f916654c-9eac-44ed-a9f7-c013b5a99345-kube-api-access-jldjj\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.149013 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f916654c-9eac-44ed-a9f7-c013b5a99345-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.576910 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9hwb5"] Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.582772 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9hwb5"] Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.652392 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-84l42"] Nov 24 19:36:13 crc kubenswrapper[4812]: E1124 19:36:13.652805 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0f12a7-b012-43b5-a79d-5745b3c02590" containerName="init" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.652820 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0f12a7-b012-43b5-a79d-5745b3c02590" containerName="init" Nov 24 19:36:13 crc kubenswrapper[4812]: E1124 19:36:13.652842 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77e04404-63c3-4c13-8d11-5ec3953956ea" containerName="keystone-bootstrap" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.652851 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="77e04404-63c3-4c13-8d11-5ec3953956ea" containerName="keystone-bootstrap" Nov 24 19:36:13 crc kubenswrapper[4812]: E1124 19:36:13.652872 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-httpd" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.652881 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-httpd" Nov 24 19:36:13 crc 
kubenswrapper[4812]: E1124 19:36:13.652902 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-log" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.652911 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-log" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.653141 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-log" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.653182 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="77e04404-63c3-4c13-8d11-5ec3953956ea" containerName="keystone-bootstrap" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.653206 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0f12a7-b012-43b5-a79d-5745b3c02590" containerName="init" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.653223 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" containerName="glance-httpd" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.654387 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.657343 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.657676 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.657793 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.660268 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-b5j9w" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.661182 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.663068 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-84l42"] Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.711981 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f916654c-9eac-44ed-a9f7-c013b5a99345","Type":"ContainerDied","Data":"a46a1b56bb38011cb860fc589897bac96f04484f5f1c55de15fc745f10e07972"} Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.712047 4812 scope.go:117] "RemoveContainer" containerID="ffb81762bf19edd6eb17dad97155e64d97bb9845adcc73cf4714552c386780aa" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.712204 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.744355 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.749771 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.762617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-fernet-keys\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.762699 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-credential-keys\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.762725 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-combined-ca-bundle\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.762758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-scripts\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.762826 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfxf\" (UniqueName: \"kubernetes.io/projected/da6aa924-f282-445c-8aed-71f9e3282b55-kube-api-access-jvfxf\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.762864 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-config-data\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.768718 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.771663 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.773656 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.773958 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.779399 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.864690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvfxf\" (UniqueName: \"kubernetes.io/projected/da6aa924-f282-445c-8aed-71f9e3282b55-kube-api-access-jvfxf\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.864956 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-config-data\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.865129 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-fernet-keys\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.865276 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-credential-keys\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.865397 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-combined-ca-bundle\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.865532 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-scripts\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.869733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-credential-keys\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.869792 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-fernet-keys\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " 
pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.870302 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-config-data\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.870671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-combined-ca-bundle\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.875673 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-scripts\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.883716 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvfxf\" (UniqueName: \"kubernetes.io/projected/da6aa924-f282-445c-8aed-71f9e3282b55-kube-api-access-jvfxf\") pod \"keystone-bootstrap-84l42\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.966720 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.966839 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-config-data\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.966911 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.966941 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-scripts\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.966969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-logs\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 
19:36:13.967010 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.967072 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7k7\" (UniqueName: \"kubernetes.io/projected/61c84290-c737-4cee-8ac2-cfe1bf02e656-kube-api-access-wc7k7\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:13 crc kubenswrapper[4812]: I1124 19:36:13.967138 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.019558 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.068419 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.068792 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.068858 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-scripts\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.071831 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-logs\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.071878 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.071945 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc7k7\" (UniqueName: 
\"kubernetes.io/projected/61c84290-c737-4cee-8ac2-cfe1bf02e656-kube-api-access-wc7k7\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.072038 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.072121 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.072290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-config-data\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.072414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-logs\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.073694 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-scripts\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.074395 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.078479 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.078639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-config-data\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.078929 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.098445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.100551 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc7k7\" (UniqueName: \"kubernetes.io/projected/61c84290-c737-4cee-8ac2-cfe1bf02e656-kube-api-access-wc7k7\") pod \"glance-default-external-api-0\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.401329 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.981456 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77e04404-63c3-4c13-8d11-5ec3953956ea" path="/var/lib/kubelet/pods/77e04404-63c3-4c13-8d11-5ec3953956ea/volumes" Nov 24 19:36:14 crc kubenswrapper[4812]: I1124 19:36:14.983190 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f916654c-9eac-44ed-a9f7-c013b5a99345" path="/var/lib/kubelet/pods/f916654c-9eac-44ed-a9f7-c013b5a99345/volumes" Nov 24 19:36:15 crc kubenswrapper[4812]: I1124 19:36:15.009513 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Nov 24 19:36:20 crc kubenswrapper[4812]: I1124 19:36:20.009604 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.211106 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.301192 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-config-data\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.301455 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-combined-ca-bundle\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.301525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.301638 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-logs\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.301861 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-scripts\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.302260 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-logs" (OuterVolumeSpecName: "logs") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.302793 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-httpd-run\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.302854 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch5d5\" (UniqueName: \"kubernetes.io/projected/009058ea-bc3f-4d45-af05-f9ebde87dad6-kube-api-access-ch5d5\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.302949 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-internal-tls-certs\") pod \"009058ea-bc3f-4d45-af05-f9ebde87dad6\" (UID: \"009058ea-bc3f-4d45-af05-f9ebde87dad6\") " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.303160 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.304136 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.304165 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/009058ea-bc3f-4d45-af05-f9ebde87dad6-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.308614 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-scripts" (OuterVolumeSpecName: "scripts") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.349358 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.350256 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/009058ea-bc3f-4d45-af05-f9ebde87dad6-kube-api-access-ch5d5" (OuterVolumeSpecName: "kube-api-access-ch5d5") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "kube-api-access-ch5d5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.366304 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.384812 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-config-data" (OuterVolumeSpecName: "config-data") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.389112 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "009058ea-bc3f-4d45-af05-f9ebde87dad6" (UID: "009058ea-bc3f-4d45-af05-f9ebde87dad6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.405838 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.405869 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch5d5\" (UniqueName: \"kubernetes.io/projected/009058ea-bc3f-4d45-af05-f9ebde87dad6-kube-api-access-ch5d5\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.405880 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.405889 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.405899 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/009058ea-bc3f-4d45-af05-f9ebde87dad6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.405947 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.424347 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.507243 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.791473 4812 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"009058ea-bc3f-4d45-af05-f9ebde87dad6","Type":"ContainerDied","Data":"a5a07d9b4266f9dd2fff1a3d723ade60258a0d782b1637b78f7000662811ca14"} Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.791590 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.831806 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.839012 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.866994 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:21 crc kubenswrapper[4812]: E1124 19:36:21.867526 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-log" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.867548 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-log" Nov 24 19:36:21 crc kubenswrapper[4812]: E1124 19:36:21.867574 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-httpd" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.867605 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-httpd" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.867856 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-httpd" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.867883 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" containerName="glance-log" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.869016 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.875948 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.876035 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 19:36:21 crc kubenswrapper[4812]: I1124 19:36:21.876476 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016296 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016513 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016622 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bp2l\" (UniqueName: \"kubernetes.io/projected/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-kube-api-access-8bp2l\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.016953 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118233 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118284 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118395 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118423 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bp2l\" (UniqueName: \"kubernetes.io/projected/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-kube-api-access-8bp2l\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118445 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118827 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-logs\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118875 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.118978 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.124842 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.124871 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.125970 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.131663 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.134500 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bp2l\" (UniqueName: \"kubernetes.io/projected/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-kube-api-access-8bp2l\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.145817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.195459 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.388747 4812 scope.go:117] "RemoveContainer" containerID="94459c598e3ea2371bb7e784dbc1ba529eee3868a4c3c05903f538edca9152da" Nov 24 19:36:22 crc kubenswrapper[4812]: E1124 19:36:22.413055 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 24 19:36:22 crc kubenswrapper[4812]: E1124 19:36:22.413550 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s6cc9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-nl6jz_openstack(5a5a2d87-4746-4254-9f63-10fe73a4001f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 19:36:22 crc kubenswrapper[4812]: E1124 19:36:22.414867 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying config: context canceled\"" pod="openstack/cinder-db-sync-nl6jz" podUID="5a5a2d87-4746-4254-9f63-10fe73a4001f" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.569885 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.584861 4812 scope.go:117] "RemoveContainer" containerID="01b85ca6a5036b88a561491e73fad7b65775ce5e7670c0307ef2cc7741dd7576" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.636578 4812 scope.go:117] "RemoveContainer" containerID="d34dac02fe66dec06d00c965faad7817346e3f910c23dc6d95f71582d20268ce" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.733325 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-sb\") pod \"79f12eac-94ed-4787-b2ce-92c02040e8f0\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.733449 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq7dn\" (UniqueName: \"kubernetes.io/projected/79f12eac-94ed-4787-b2ce-92c02040e8f0-kube-api-access-lq7dn\") pod \"79f12eac-94ed-4787-b2ce-92c02040e8f0\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.733519 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-swift-storage-0\") pod \"79f12eac-94ed-4787-b2ce-92c02040e8f0\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.733584 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-nb\") pod \"79f12eac-94ed-4787-b2ce-92c02040e8f0\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.733632 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-config\") pod \"79f12eac-94ed-4787-b2ce-92c02040e8f0\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.733704 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-svc\") pod \"79f12eac-94ed-4787-b2ce-92c02040e8f0\" (UID: \"79f12eac-94ed-4787-b2ce-92c02040e8f0\") " Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.748386 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f12eac-94ed-4787-b2ce-92c02040e8f0-kube-api-access-lq7dn" (OuterVolumeSpecName: "kube-api-access-lq7dn") pod "79f12eac-94ed-4787-b2ce-92c02040e8f0" (UID: "79f12eac-94ed-4787-b2ce-92c02040e8f0"). InnerVolumeSpecName "kube-api-access-lq7dn". 
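
The cinder-db-sync entries above show the progression from ErrImagePull (the pull RPC was canceled mid-copy) to ImagePullBackOff a moment later at 19:36:22.821: kubelet will not retry the pull immediately but on a doubling backoff. A sketch of that retry shape — the 10s initial delay and 5m cap are commonly cited kubelet defaults but are assumptions here, not values read from this node:

    package main

    import (
        "fmt"
        "time"
    )

    // backoffSchedule prints the retry delays of a doubling backoff with a
    // cap — the shape behind ErrImagePull turning into ImagePullBackOff.
    // The 10s initial delay and 5m cap are assumed defaults.
    func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
        out := make([]time.Duration, 0, attempts)
        d := initial
        for i := 0; i < attempts; i++ {
            out = append(out, d)
            d *= 2
            if d > max {
                d = max
            }
        }
        return out
    }

    func main() {
        for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
            fmt.Printf("retry %d after %s\n", i+1, d)
        }
    }
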
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.782817 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79f12eac-94ed-4787-b2ce-92c02040e8f0" (UID: "79f12eac-94ed-4787-b2ce-92c02040e8f0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.788039 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79f12eac-94ed-4787-b2ce-92c02040e8f0" (UID: "79f12eac-94ed-4787-b2ce-92c02040e8f0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.795990 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79f12eac-94ed-4787-b2ce-92c02040e8f0" (UID: "79f12eac-94ed-4787-b2ce-92c02040e8f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.796322 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79f12eac-94ed-4787-b2ce-92c02040e8f0" (UID: "79f12eac-94ed-4787-b2ce-92c02040e8f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.800305 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-config" (OuterVolumeSpecName: "config") pod "79f12eac-94ed-4787-b2ce-92c02040e8f0" (UID: "79f12eac-94ed-4787-b2ce-92c02040e8f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.805468 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j8glp" event={"ID":"2e22c06f-3e52-4943-9a00-b964d62d8cab","Type":"ContainerStarted","Data":"c8722eb6dbfb9224875a2ab675d16b71bae4375b94338205f71c4fe08e703374"} Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.811924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pq5zx" event={"ID":"7fd280c7-5e1d-4106-af02-86eee8c72f62","Type":"ContainerStarted","Data":"f5c4371e8bb52bf127f9a713ce008c36c8f59703bdb0516d218edf25f0da6d3c"} Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.814561 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" event={"ID":"79f12eac-94ed-4787-b2ce-92c02040e8f0","Type":"ContainerDied","Data":"aa523295df4d85ae36a7bf60d0bd9edd215436ad649f43a34110b1a1f3e2a5ef"} Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.814599 4812 scope.go:117] "RemoveContainer" containerID="62a2d7b375580c8c67d3f9061814f2be06edb499e454fecdadbeb3de5fedee03" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.815077 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6856c564b9-rfzx9" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.820461 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerStarted","Data":"4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7"} Nov 24 19:36:22 crc kubenswrapper[4812]: E1124 19:36:22.821793 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-nl6jz" podUID="5a5a2d87-4746-4254-9f63-10fe73a4001f" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.822472 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j8glp" podStartSLOduration=1.540149808 podStartE2EDuration="21.822450433s" podCreationTimestamp="2025-11-24 19:36:01 +0000 UTC" firstStartedPulling="2025-11-24 19:36:02.109171008 +0000 UTC m=+1155.898123379" lastFinishedPulling="2025-11-24 19:36:22.391471603 +0000 UTC m=+1176.180424004" observedRunningTime="2025-11-24 19:36:22.817566213 +0000 UTC m=+1176.606518584" watchObservedRunningTime="2025-11-24 19:36:22.822450433 +0000 UTC m=+1176.611402804" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.840172 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.840202 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.840212 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.840222 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.840231 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79f12eac-94ed-4787-b2ce-92c02040e8f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.840239 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq7dn\" (UniqueName: \"kubernetes.io/projected/79f12eac-94ed-4787-b2ce-92c02040e8f0-kube-api-access-lq7dn\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.840262 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-pq5zx" podStartSLOduration=1.568103668 podStartE2EDuration="21.840245485s" podCreationTimestamp="2025-11-24 19:36:01 +0000 UTC" firstStartedPulling="2025-11-24 19:36:02.119421309 +0000 UTC m=+1155.908373670" lastFinishedPulling="2025-11-24 19:36:22.391563076 +0000 UTC m=+1176.180515487" observedRunningTime="2025-11-24 19:36:22.833302985 +0000 UTC 
m=+1176.622255356" watchObservedRunningTime="2025-11-24 19:36:22.840245485 +0000 UTC m=+1176.629197856" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.847399 4812 scope.go:117] "RemoveContainer" containerID="7233454ac30f895f85b78aa1117a8825c252bbd3e311ab41c2d20b25ed4e1a53" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.890704 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-rfzx9"] Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.909276 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6856c564b9-rfzx9"] Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.920178 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-84l42"] Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.979579 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="009058ea-bc3f-4d45-af05-f9ebde87dad6" path="/var/lib/kubelet/pods/009058ea-bc3f-4d45-af05-f9ebde87dad6/volumes" Nov 24 19:36:22 crc kubenswrapper[4812]: I1124 19:36:22.980491 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" path="/var/lib/kubelet/pods/79f12eac-94ed-4787-b2ce-92c02040e8f0/volumes" Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.057001 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.147069 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:36:23 crc kubenswrapper[4812]: W1124 19:36:23.149633 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61c84290_c737_4cee_8ac2_cfe1bf02e656.slice/crio-2286e9664e0ad72046feb8ee264e75f1896ca113b6536b04af1cb77ba8d0a413 WatchSource:0}: Error finding container 2286e9664e0ad72046feb8ee264e75f1896ca113b6536b04af1cb77ba8d0a413: Status 404 returned error can't find the container with id 2286e9664e0ad72046feb8ee264e75f1896ca113b6536b04af1cb77ba8d0a413 Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.841381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905","Type":"ContainerStarted","Data":"d58f24f98b6547ef728cf80650b0bb8c34d954282d7cfc59ab58fb5f9fd62005"} Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.841908 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905","Type":"ContainerStarted","Data":"b83ed651766287ec7bf8b5d5165367a7b5529e1dd1e57e8c4691a9967e828824"} Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.845637 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-84l42" event={"ID":"da6aa924-f282-445c-8aed-71f9e3282b55","Type":"ContainerStarted","Data":"a8144b0e721573926c3a34a9404c880cf7fb2982f4529eb1e7122567c155687e"} Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.845688 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-84l42" event={"ID":"da6aa924-f282-445c-8aed-71f9e3282b55","Type":"ContainerStarted","Data":"3f0812e40a9cb022829c5bbecbdea02fb91fbeee8b3455bb8e92a239a22454f2"} Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.849227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"61c84290-c737-4cee-8ac2-cfe1bf02e656","Type":"ContainerStarted","Data":"8bdc38b71291931cd55ce60ff4be4f9cb9e8f79ff63fb83d416f76649df38c59"} Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.849274 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61c84290-c737-4cee-8ac2-cfe1bf02e656","Type":"ContainerStarted","Data":"2286e9664e0ad72046feb8ee264e75f1896ca113b6536b04af1cb77ba8d0a413"} Nov 24 19:36:23 crc kubenswrapper[4812]: I1124 19:36:23.863666 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-84l42" podStartSLOduration=10.863645092 podStartE2EDuration="10.863645092s" podCreationTimestamp="2025-11-24 19:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:23.862553261 +0000 UTC m=+1177.651505642" watchObservedRunningTime="2025-11-24 19:36:23.863645092 +0000 UTC m=+1177.652597463" Nov 24 19:36:24 crc kubenswrapper[4812]: I1124 19:36:24.864203 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61c84290-c737-4cee-8ac2-cfe1bf02e656","Type":"ContainerStarted","Data":"9901b266eef29a4a8c9eb6aa553ed705c00a6fa7715ab7158d9f2421450a1193"} Nov 24 19:36:24 crc kubenswrapper[4812]: I1124 19:36:24.866972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerStarted","Data":"1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5"} Nov 24 19:36:24 crc kubenswrapper[4812]: I1124 19:36:24.869466 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905","Type":"ContainerStarted","Data":"9ba3157ec54faa9b0fe3a70b70f8e9b99036f5aa81cab267d90afa91ac91cfc9"} Nov 24 19:36:24 crc kubenswrapper[4812]: I1124 19:36:24.894606 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.894589295 podStartE2EDuration="11.894589295s" podCreationTimestamp="2025-11-24 19:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:24.881017176 +0000 UTC m=+1178.669969547" watchObservedRunningTime="2025-11-24 19:36:24.894589295 +0000 UTC m=+1178.683541666" Nov 24 19:36:26 crc kubenswrapper[4812]: I1124 19:36:26.891192 4812 generic.go:334] "Generic (PLEG): container finished" podID="2e22c06f-3e52-4943-9a00-b964d62d8cab" containerID="c8722eb6dbfb9224875a2ab675d16b71bae4375b94338205f71c4fe08e703374" exitCode=0 Nov 24 19:36:26 crc kubenswrapper[4812]: I1124 19:36:26.891293 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j8glp" event={"ID":"2e22c06f-3e52-4943-9a00-b964d62d8cab","Type":"ContainerDied","Data":"c8722eb6dbfb9224875a2ab675d16b71bae4375b94338205f71c4fe08e703374"} Nov 24 19:36:26 crc kubenswrapper[4812]: I1124 19:36:26.895085 4812 generic.go:334] "Generic (PLEG): container finished" podID="da6aa924-f282-445c-8aed-71f9e3282b55" containerID="a8144b0e721573926c3a34a9404c880cf7fb2982f4529eb1e7122567c155687e" exitCode=0 Nov 24 19:36:26 crc kubenswrapper[4812]: I1124 19:36:26.895113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-84l42" event={"ID":"da6aa924-f282-445c-8aed-71f9e3282b55","Type":"ContainerDied","Data":"a8144b0e721573926c3a34a9404c880cf7fb2982f4529eb1e7122567c155687e"} Nov 24 19:36:26 crc kubenswrapper[4812]: I1124 19:36:26.896992 4812 generic.go:334] "Generic (PLEG): container finished" podID="7fd280c7-5e1d-4106-af02-86eee8c72f62" containerID="f5c4371e8bb52bf127f9a713ce008c36c8f59703bdb0516d218edf25f0da6d3c" exitCode=0 Nov 24 19:36:26 crc kubenswrapper[4812]: I1124 19:36:26.897020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pq5zx" event={"ID":"7fd280c7-5e1d-4106-af02-86eee8c72f62","Type":"ContainerDied","Data":"f5c4371e8bb52bf127f9a713ce008c36c8f59703bdb0516d218edf25f0da6d3c"} Nov 24 19:36:26 crc kubenswrapper[4812]: I1124 19:36:26.909866 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.909849255 podStartE2EDuration="5.909849255s" podCreationTimestamp="2025-11-24 19:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:24.915870417 +0000 UTC m=+1178.704822798" watchObservedRunningTime="2025-11-24 19:36:26.909849255 +0000 UTC m=+1180.698801626" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.796463 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.810097 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.814187 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.921398 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-credential-keys\") pod \"da6aa924-f282-445c-8aed-71f9e3282b55\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.921649 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-scripts\") pod \"da6aa924-f282-445c-8aed-71f9e3282b55\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.921672 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gpjd\" (UniqueName: \"kubernetes.io/projected/7fd280c7-5e1d-4106-af02-86eee8c72f62-kube-api-access-7gpjd\") pod \"7fd280c7-5e1d-4106-af02-86eee8c72f62\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.921716 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22c06f-3e52-4943-9a00-b964d62d8cab-logs\") pod \"2e22c06f-3e52-4943-9a00-b964d62d8cab\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.921732 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvfxf\" (UniqueName: \"kubernetes.io/projected/da6aa924-f282-445c-8aed-71f9e3282b55-kube-api-access-jvfxf\") pod \"da6aa924-f282-445c-8aed-71f9e3282b55\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-config-data\") pod \"2e22c06f-3e52-4943-9a00-b964d62d8cab\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922231 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-config-data\") pod \"da6aa924-f282-445c-8aed-71f9e3282b55\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922391 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-combined-ca-bundle\") pod \"7fd280c7-5e1d-4106-af02-86eee8c72f62\" (UID: \"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922410 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-combined-ca-bundle\") pod \"da6aa924-f282-445c-8aed-71f9e3282b55\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922467 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-db-sync-config-data\") pod \"7fd280c7-5e1d-4106-af02-86eee8c72f62\" (UID: 
\"7fd280c7-5e1d-4106-af02-86eee8c72f62\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922535 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-fernet-keys\") pod \"da6aa924-f282-445c-8aed-71f9e3282b55\" (UID: \"da6aa924-f282-445c-8aed-71f9e3282b55\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-scripts\") pod \"2e22c06f-3e52-4943-9a00-b964d62d8cab\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922651 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmcwh\" (UniqueName: \"kubernetes.io/projected/2e22c06f-3e52-4943-9a00-b964d62d8cab-kube-api-access-wmcwh\") pod \"2e22c06f-3e52-4943-9a00-b964d62d8cab\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.922710 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-combined-ca-bundle\") pod \"2e22c06f-3e52-4943-9a00-b964d62d8cab\" (UID: \"2e22c06f-3e52-4943-9a00-b964d62d8cab\") " Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.926022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "da6aa924-f282-445c-8aed-71f9e3282b55" (UID: "da6aa924-f282-445c-8aed-71f9e3282b55"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.929918 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-scripts" (OuterVolumeSpecName: "scripts") pod "2e22c06f-3e52-4943-9a00-b964d62d8cab" (UID: "2e22c06f-3e52-4943-9a00-b964d62d8cab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.930112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerStarted","Data":"42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8"} Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.930327 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e22c06f-3e52-4943-9a00-b964d62d8cab-logs" (OuterVolumeSpecName: "logs") pod "2e22c06f-3e52-4943-9a00-b964d62d8cab" (UID: "2e22c06f-3e52-4943-9a00-b964d62d8cab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.930325 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da6aa924-f282-445c-8aed-71f9e3282b55-kube-api-access-jvfxf" (OuterVolumeSpecName: "kube-api-access-jvfxf") pod "da6aa924-f282-445c-8aed-71f9e3282b55" (UID: "da6aa924-f282-445c-8aed-71f9e3282b55"). InnerVolumeSpecName "kube-api-access-jvfxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.934255 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "7fd280c7-5e1d-4106-af02-86eee8c72f62" (UID: "7fd280c7-5e1d-4106-af02-86eee8c72f62"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.936501 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "da6aa924-f282-445c-8aed-71f9e3282b55" (UID: "da6aa924-f282-445c-8aed-71f9e3282b55"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.940845 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e22c06f-3e52-4943-9a00-b964d62d8cab-kube-api-access-wmcwh" (OuterVolumeSpecName: "kube-api-access-wmcwh") pod "2e22c06f-3e52-4943-9a00-b964d62d8cab" (UID: "2e22c06f-3e52-4943-9a00-b964d62d8cab"). InnerVolumeSpecName "kube-api-access-wmcwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.948219 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j8glp" event={"ID":"2e22c06f-3e52-4943-9a00-b964d62d8cab","Type":"ContainerDied","Data":"6a822e219ca30b04ee0e27d81e2eabb90937aad9d56135fedda1dd5f41c11b03"} Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.948262 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a822e219ca30b04ee0e27d81e2eabb90937aad9d56135fedda1dd5f41c11b03" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.948367 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j8glp" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.961220 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-scripts" (OuterVolumeSpecName: "scripts") pod "da6aa924-f282-445c-8aed-71f9e3282b55" (UID: "da6aa924-f282-445c-8aed-71f9e3282b55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.966678 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-84l42" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.966824 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd280c7-5e1d-4106-af02-86eee8c72f62-kube-api-access-7gpjd" (OuterVolumeSpecName: "kube-api-access-7gpjd") pod "7fd280c7-5e1d-4106-af02-86eee8c72f62" (UID: "7fd280c7-5e1d-4106-af02-86eee8c72f62"). InnerVolumeSpecName "kube-api-access-7gpjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.968193 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-config-data" (OuterVolumeSpecName: "config-data") pod "da6aa924-f282-445c-8aed-71f9e3282b55" (UID: "da6aa924-f282-445c-8aed-71f9e3282b55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.978439 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-pq5zx" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.978875 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-config-data" (OuterVolumeSpecName: "config-data") pod "2e22c06f-3e52-4943-9a00-b964d62d8cab" (UID: "2e22c06f-3e52-4943-9a00-b964d62d8cab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.985208 4812 generic.go:334] "Generic (PLEG): container finished" podID="e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" containerID="6ecefa45e298955bae6aa3a3705807ec3dc02991ab6d484fb90e5a7431b808c5" exitCode=0 Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.988089 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fd280c7-5e1d-4106-af02-86eee8c72f62" (UID: "7fd280c7-5e1d-4106-af02-86eee8c72f62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.992888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-84l42" event={"ID":"da6aa924-f282-445c-8aed-71f9e3282b55","Type":"ContainerDied","Data":"3f0812e40a9cb022829c5bbecbdea02fb91fbeee8b3455bb8e92a239a22454f2"} Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.992930 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0812e40a9cb022829c5bbecbdea02fb91fbeee8b3455bb8e92a239a22454f2" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.992940 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-pq5zx" event={"ID":"7fd280c7-5e1d-4106-af02-86eee8c72f62","Type":"ContainerDied","Data":"1fa5eefa229f841feb68b4ed0e15bbbac65a0f7465ab5b7c6b4abb5225684926"} Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.992951 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fa5eefa229f841feb68b4ed0e15bbbac65a0f7465ab5b7c6b4abb5225684926" Nov 24 19:36:28 crc kubenswrapper[4812]: I1124 19:36:28.992958 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wv8xf" event={"ID":"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f","Type":"ContainerDied","Data":"6ecefa45e298955bae6aa3a3705807ec3dc02991ab6d484fb90e5a7431b808c5"} Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.012656 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da6aa924-f282-445c-8aed-71f9e3282b55" (UID: "da6aa924-f282-445c-8aed-71f9e3282b55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.025372 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-574f454648-szvd4"] Nov 24 19:36:29 crc kubenswrapper[4812]: E1124 19:36:29.025727 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="dnsmasq-dns" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.025746 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="dnsmasq-dns" Nov 24 19:36:29 crc kubenswrapper[4812]: E1124 19:36:29.025768 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e22c06f-3e52-4943-9a00-b964d62d8cab" containerName="placement-db-sync" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.025775 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e22c06f-3e52-4943-9a00-b964d62d8cab" containerName="placement-db-sync" Nov 24 19:36:29 crc kubenswrapper[4812]: E1124 19:36:29.025784 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd280c7-5e1d-4106-af02-86eee8c72f62" containerName="barbican-db-sync" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.025789 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd280c7-5e1d-4106-af02-86eee8c72f62" containerName="barbican-db-sync" Nov 24 19:36:29 crc kubenswrapper[4812]: E1124 19:36:29.025806 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da6aa924-f282-445c-8aed-71f9e3282b55" containerName="keystone-bootstrap" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.025813 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="da6aa924-f282-445c-8aed-71f9e3282b55" containerName="keystone-bootstrap" Nov 24 19:36:29 crc kubenswrapper[4812]: E1124 19:36:29.025831 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="init" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.025837 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="init" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.025992 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e22c06f-3e52-4943-9a00-b964d62d8cab" containerName="placement-db-sync" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.026006 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd280c7-5e1d-4106-af02-86eee8c72f62" containerName="barbican-db-sync" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.026018 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f12eac-94ed-4787-b2ce-92c02040e8f0" containerName="dnsmasq-dns" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.026029 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="da6aa924-f282-445c-8aed-71f9e3282b55" containerName="keystone-bootstrap" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.026882 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028531 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028548 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028558 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gpjd\" (UniqueName: \"kubernetes.io/projected/7fd280c7-5e1d-4106-af02-86eee8c72f62-kube-api-access-7gpjd\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028567 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22c06f-3e52-4943-9a00-b964d62d8cab-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028576 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvfxf\" (UniqueName: \"kubernetes.io/projected/da6aa924-f282-445c-8aed-71f9e3282b55-kube-api-access-jvfxf\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028584 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028592 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028601 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028609 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028617 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/7fd280c7-5e1d-4106-af02-86eee8c72f62-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028625 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/da6aa924-f282-445c-8aed-71f9e3282b55-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028632 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.028640 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmcwh\" (UniqueName: \"kubernetes.io/projected/2e22c06f-3e52-4943-9a00-b964d62d8cab-kube-api-access-wmcwh\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc 
kubenswrapper[4812]: I1124 19:36:29.030016 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.033650 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.039261 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e22c06f-3e52-4943-9a00-b964d62d8cab" (UID: "2e22c06f-3e52-4943-9a00-b964d62d8cab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.052430 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-574f454648-szvd4"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.100229 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c8f69d86c-2v9p4"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.101253 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.103167 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.103597 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.113516 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c8f69d86c-2v9p4"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131586 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-scripts\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131639 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-fernet-keys\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131663 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-internal-tls-certs\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131698 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-config-data\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131719 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-config-data\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-internal-tls-certs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-public-tls-certs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131896 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdcvk\" (UniqueName: \"kubernetes.io/projected/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-kube-api-access-cdcvk\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131933 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c8509a7-6885-4238-9da3-b214b5f8868e-logs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-scripts\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.131992 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-credential-keys\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.132047 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-combined-ca-bundle\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.132069 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-combined-ca-bundle\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.132091 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-x8dvx\" (UniqueName: \"kubernetes.io/projected/1c8509a7-6885-4238-9da3-b214b5f8868e-kube-api-access-x8dvx\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.132125 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-public-tls-certs\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.132171 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22c06f-3e52-4943-9a00-b964d62d8cab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.204217 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6474fd5f77-zwqph"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.209998 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.215147 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.217137 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-94c5495b6-f8ptk"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.218726 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.226430 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.235472 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-combined-ca-bundle\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.235590 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-combined-ca-bundle\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.235672 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8dvx\" (UniqueName: \"kubernetes.io/projected/1c8509a7-6885-4238-9da3-b214b5f8868e-kube-api-access-x8dvx\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.235790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-public-tls-certs\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " 
pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.235888 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-scripts\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.235985 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-fernet-keys\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236076 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-internal-tls-certs\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236155 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-config-data\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236235 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-config-data\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236442 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-internal-tls-certs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236545 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-public-tls-certs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236619 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdcvk\" (UniqueName: \"kubernetes.io/projected/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-kube-api-access-cdcvk\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236738 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c8509a7-6885-4238-9da3-b214b5f8868e-logs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236815 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-credential-keys\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.236878 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-scripts\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.241382 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c8509a7-6885-4238-9da3-b214b5f8868e-logs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.250392 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6474fd5f77-zwqph"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.250502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-public-tls-certs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.253946 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-combined-ca-bundle\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.256545 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-scripts\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.257535 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-config-data\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.258657 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-credential-keys\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.259156 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-internal-tls-certs\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.259764 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-combined-ca-bundle\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.259930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-public-tls-certs\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.261730 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-94c5495b6-f8ptk"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.267864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-internal-tls-certs\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.267934 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-fernet-keys\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.268165 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-config-data\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.273059 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-scripts\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.287293 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdcvk\" (UniqueName: \"kubernetes.io/projected/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-kube-api-access-cdcvk\") pod \"keystone-c8f69d86c-2v9p4\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.288439 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8dvx\" (UniqueName: \"kubernetes.io/projected/1c8509a7-6885-4238-9da3-b214b5f8868e-kube-api-access-x8dvx\") pod \"placement-574f454648-szvd4\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.330480 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b5d4767c-7njn9"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.331852 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.337817 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-combined-ca-bundle\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.337975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-combined-ca-bundle\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338198 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data-custom\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338231 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w2jx\" (UniqueName: \"kubernetes.io/projected/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-kube-api-access-2w2jx\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338263 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338373 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5388a6e8-73d7-4097-9521-b47649b4c6c8-logs\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338422 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data-custom\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338472 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-logs\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.338522 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphk8\" (UniqueName: \"kubernetes.io/projected/5388a6e8-73d7-4097-9521-b47649b4c6c8-kube-api-access-jphk8\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.340983 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b5d4767c-7njn9"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.348056 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.383239 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-677dbcd544-vn9hr"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.387710 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.394047 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.395717 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-677dbcd544-vn9hr"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.414394 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.439955 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5388a6e8-73d7-4097-9521-b47649b4c6c8-logs\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.439995 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-combined-ca-bundle\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440018 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tn4l\" (UniqueName: \"kubernetes.io/projected/d3ebe77b-98be-4168-92bb-205f96828b9f-kube-api-access-9tn4l\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440047 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data-custom\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440075 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-logs\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440103 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-config\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440125 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphk8\" (UniqueName: \"kubernetes.io/projected/5388a6e8-73d7-4097-9521-b47649b4c6c8-kube-api-access-jphk8\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2589de-d523-4229-adfa-d0b682557955-logs\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440168 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvql\" (UniqueName: 
\"kubernetes.io/projected/1b2589de-d523-4229-adfa-d0b682557955-kube-api-access-9nvql\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-combined-ca-bundle\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440212 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data-custom\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440267 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440289 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440328 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-combined-ca-bundle\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440366 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data-custom\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 
19:36:29.440389 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w2jx\" (UniqueName: \"kubernetes.io/projected/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-kube-api-access-2w2jx\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440406 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440451 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-svc\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.440703 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5388a6e8-73d7-4097-9521-b47649b4c6c8-logs\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.441817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-logs\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.482420 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-combined-ca-bundle\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.482856 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data-custom\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.483077 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-combined-ca-bundle\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 
24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.483081 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data-custom\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.484295 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.485472 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.488958 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w2jx\" (UniqueName: \"kubernetes.io/projected/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-kube-api-access-2w2jx\") pod \"barbican-keystone-listener-94c5495b6-f8ptk\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.489409 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphk8\" (UniqueName: \"kubernetes.io/projected/5388a6e8-73d7-4097-9521-b47649b4c6c8-kube-api-access-jphk8\") pod \"barbican-worker-6474fd5f77-zwqph\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541015 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541527 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541558 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541586 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-svc\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541628 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-combined-ca-bundle\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541645 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tn4l\" (UniqueName: \"kubernetes.io/projected/d3ebe77b-98be-4168-92bb-205f96828b9f-kube-api-access-9tn4l\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541684 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-config\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2589de-d523-4229-adfa-d0b682557955-logs\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541730 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvql\" (UniqueName: \"kubernetes.io/projected/1b2589de-d523-4229-adfa-d0b682557955-kube-api-access-9nvql\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 
19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.541781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data-custom\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.542864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-sb\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.543161 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2589de-d523-4229-adfa-d0b682557955-logs\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.543969 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-config\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.544617 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-nb\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.545226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-combined-ca-bundle\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.545291 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-svc\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.545869 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-swift-storage-0\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.545994 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data-custom\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.555449 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.560907 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.560978 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvql\" (UniqueName: \"kubernetes.io/projected/1b2589de-d523-4229-adfa-d0b682557955-kube-api-access-9nvql\") pod \"barbican-api-677dbcd544-vn9hr\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.561925 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tn4l\" (UniqueName: \"kubernetes.io/projected/d3ebe77b-98be-4168-92bb-205f96828b9f-kube-api-access-9tn4l\") pod \"dnsmasq-dns-6b5d4767c-7njn9\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.648653 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.726584 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.852093 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-574f454648-szvd4"] Nov 24 19:36:29 crc kubenswrapper[4812]: I1124 19:36:29.927705 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c8f69d86c-2v9p4"] Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.011493 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8f69d86c-2v9p4" event={"ID":"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3","Type":"ContainerStarted","Data":"2c6233e47a664613934185a9c49954edbe75e5f9d825c33d1da8e1f2c02345fb"} Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.020578 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574f454648-szvd4" event={"ID":"1c8509a7-6885-4238-9da3-b214b5f8868e","Type":"ContainerStarted","Data":"737ba922531e24113a510f760940859511d64804cfa011f58aebb0fed14cef4e"} Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.125589 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6474fd5f77-zwqph"] Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.229780 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-94c5495b6-f8ptk"] Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.300067 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b5d4767c-7njn9"] Nov 24 19:36:30 crc kubenswrapper[4812]: W1124 19:36:30.308161 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ebe77b_98be_4168_92bb_205f96828b9f.slice/crio-159c4e243998923a13d735f0f92854816d64654253892cd9bb0cebffab90d272 WatchSource:0}: Error finding container 159c4e243998923a13d735f0f92854816d64654253892cd9bb0cebffab90d272: Status 404 returned error can't find the container with id 159c4e243998923a13d735f0f92854816d64654253892cd9bb0cebffab90d272 Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.339465 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.360964 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwxxb\" (UniqueName: \"kubernetes.io/projected/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-kube-api-access-gwxxb\") pod \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.361062 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-combined-ca-bundle\") pod \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.361126 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-config\") pod \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\" (UID: \"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f\") " Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.379076 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-kube-api-access-gwxxb" (OuterVolumeSpecName: "kube-api-access-gwxxb") pod "e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" (UID: "e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f"). InnerVolumeSpecName "kube-api-access-gwxxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.402824 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-677dbcd544-vn9hr"] Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.406016 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-config" (OuterVolumeSpecName: "config") pod "e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" (UID: "e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.427992 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" (UID: "e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.463100 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.463136 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:30 crc kubenswrapper[4812]: I1124 19:36:30.463148 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwxxb\" (UniqueName: \"kubernetes.io/projected/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f-kube-api-access-gwxxb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.041610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6474fd5f77-zwqph" event={"ID":"5388a6e8-73d7-4097-9521-b47649b4c6c8","Type":"ContainerStarted","Data":"20027080f2b1aa6654230e68a7287fd0f79e9920ece752829987525cae68b110"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.047672 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8f69d86c-2v9p4" event={"ID":"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3","Type":"ContainerStarted","Data":"d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.047817 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.050702 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574f454648-szvd4" event={"ID":"1c8509a7-6885-4238-9da3-b214b5f8868e","Type":"ContainerStarted","Data":"d07bf2cbab76f1bef1a940806ac6f6323ae423f3a69064aeb96db6f329d5665f"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.050746 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574f454648-szvd4" event={"ID":"1c8509a7-6885-4238-9da3-b214b5f8868e","Type":"ContainerStarted","Data":"692e428119dfde862fe0aeb20102df094a9cb17fc1bcce8548dce4d9b4546e92"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.051263 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.051327 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-574f454648-szvd4" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.059146 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-wv8xf" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.059181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-wv8xf" event={"ID":"e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f","Type":"ContainerDied","Data":"53e3b280248ef935044052e12244eedb9d0055d38cfb3934b582fe8a60d70ad3"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.059252 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53e3b280248ef935044052e12244eedb9d0055d38cfb3934b582fe8a60d70ad3" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.070015 4812 generic.go:334] "Generic (PLEG): container finished" podID="d3ebe77b-98be-4168-92bb-205f96828b9f" containerID="03bf55c63b735a5330e894ed4be4b80d2f378ff394045e55404aef8a26664deb" exitCode=0 Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.070092 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" event={"ID":"d3ebe77b-98be-4168-92bb-205f96828b9f","Type":"ContainerDied","Data":"03bf55c63b735a5330e894ed4be4b80d2f378ff394045e55404aef8a26664deb"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.070117 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" event={"ID":"d3ebe77b-98be-4168-92bb-205f96828b9f","Type":"ContainerStarted","Data":"159c4e243998923a13d735f0f92854816d64654253892cd9bb0cebffab90d272"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.074080 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c8f69d86c-2v9p4" podStartSLOduration=2.074068283 podStartE2EDuration="2.074068283s" podCreationTimestamp="2025-11-24 19:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:31.072838858 +0000 UTC m=+1184.861791229" watchObservedRunningTime="2025-11-24 19:36:31.074068283 +0000 UTC m=+1184.863020654" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.079926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" event={"ID":"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3","Type":"ContainerStarted","Data":"942bc5cdf6f1cd2fd24a5d3e49e9e3e57fc49522e24b1ffe010062052809a919"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.100280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677dbcd544-vn9hr" event={"ID":"1b2589de-d523-4229-adfa-d0b682557955","Type":"ContainerStarted","Data":"2c08fc1656dc6af46fe487c79f4e8e13a88ca0c62c23dac3c5e5110812aabe0e"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.100327 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677dbcd544-vn9hr" event={"ID":"1b2589de-d523-4229-adfa-d0b682557955","Type":"ContainerStarted","Data":"5e41bf460e7828cc8ad0ecb56e39118e5257e8cbef4e4a5d285aa5cb0bdedc8a"} Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.160174 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-574f454648-szvd4" podStartSLOduration=3.160156926 podStartE2EDuration="3.160156926s" podCreationTimestamp="2025-11-24 19:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:31.133005626 +0000 UTC m=+1184.921957997" watchObservedRunningTime="2025-11-24 19:36:31.160156926 +0000 UTC 
m=+1184.949109297" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.175866 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b5d4767c-7njn9"] Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.209099 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-fddsx"] Nov 24 19:36:31 crc kubenswrapper[4812]: E1124 19:36:31.209446 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" containerName="neutron-db-sync" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.209462 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" containerName="neutron-db-sync" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.209658 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" containerName="neutron-db-sync" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.214778 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.229486 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-fddsx"] Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.293849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.293921 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.293967 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.294000 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkffz\" (UniqueName: \"kubernetes.io/projected/d260a634-3b2f-49ae-9506-1dc164f7423f-kube-api-access-gkffz\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.294022 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.294051 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-config\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.396769 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkffz\" (UniqueName: \"kubernetes.io/projected/d260a634-3b2f-49ae-9506-1dc164f7423f-kube-api-access-gkffz\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.397075 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.397133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-config\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.397203 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.397252 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-nb\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.397314 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.399386 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-svc\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.402515 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-sb\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.402759 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-nb\") pod 
\"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.402830 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-config\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.403209 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-swift-storage-0\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.413018 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84884b67b4-mzcbv"] Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.414427 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.417293 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.418402 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkffz\" (UniqueName: \"kubernetes.io/projected/d260a634-3b2f-49ae-9506-1dc164f7423f-kube-api-access-gkffz\") pod \"dnsmasq-dns-5cc67f459c-fddsx\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") " pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.418513 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.418637 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-shbmd" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.423695 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.431159 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84884b67b4-mzcbv"] Nov 24 19:36:31 crc kubenswrapper[4812]: E1124 19:36:31.463819 4812 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 24 19:36:31 crc kubenswrapper[4812]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/d3ebe77b-98be-4168-92bb-205f96828b9f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 19:36:31 crc kubenswrapper[4812]: > podSandboxID="159c4e243998923a13d735f0f92854816d64654253892cd9bb0cebffab90d272" Nov 24 19:36:31 crc kubenswrapper[4812]: E1124 19:36:31.463990 4812 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 24 19:36:31 crc kubenswrapper[4812]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed 
--no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cch55h5bchd4h54bh78hbfh579h5hc4h67h8ch674h658h5c4hb4h87h6h95h574h65fhb8h659h9bh56bh66fh5c9h65dh594h5f8h77h79q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9tn4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6b5d4767c-7njn9_openstack(d3ebe77b-98be-4168-92bb-205f96828b9f): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/d3ebe77b-98be-4168-92bb-205f96828b9f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 19:36:31 crc kubenswrapper[4812]: > logger="UnhandledError" Nov 24 19:36:31 crc kubenswrapper[4812]: E1124 19:36:31.465956 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/d3ebe77b-98be-4168-92bb-205f96828b9f/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" 
podUID="d3ebe77b-98be-4168-92bb-205f96828b9f" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.499481 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-httpd-config\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.499539 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-combined-ca-bundle\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.499594 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldbp4\" (UniqueName: \"kubernetes.io/projected/b4073e4e-f8f0-4ef9-a813-87920edaa840-kube-api-access-ldbp4\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.499625 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-ovndb-tls-certs\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.499655 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-config\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.556039 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.601163 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-httpd-config\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.601205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-combined-ca-bundle\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.601243 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldbp4\" (UniqueName: \"kubernetes.io/projected/b4073e4e-f8f0-4ef9-a813-87920edaa840-kube-api-access-ldbp4\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.601264 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-ovndb-tls-certs\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.601283 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-config\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.605888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-config\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.606421 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-combined-ca-bundle\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.610313 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-httpd-config\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.610718 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-ovndb-tls-certs\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.640119 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ldbp4\" (UniqueName: \"kubernetes.io/projected/b4073e4e-f8f0-4ef9-a813-87920edaa840-kube-api-access-ldbp4\") pod \"neutron-84884b67b4-mzcbv\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:31 crc kubenswrapper[4812]: I1124 19:36:31.836488 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.113245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677dbcd544-vn9hr" event={"ID":"1b2589de-d523-4229-adfa-d0b682557955","Type":"ContainerStarted","Data":"efe8190e176d4f61444a62711cb5acaffd4f1fc0e1300e501621076d93bceaf0"} Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.114275 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.114298 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.135162 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-677dbcd544-vn9hr" podStartSLOduration=3.135137922 podStartE2EDuration="3.135137922s" podCreationTimestamp="2025-11-24 19:36:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:32.128604184 +0000 UTC m=+1185.917556555" watchObservedRunningTime="2025-11-24 19:36:32.135137922 +0000 UTC m=+1185.924090303" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.195579 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.195966 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.229038 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.255433 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.644635 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.658605 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-config\") pod \"d3ebe77b-98be-4168-92bb-205f96828b9f\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.659073 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-nb\") pod \"d3ebe77b-98be-4168-92bb-205f96828b9f\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.659107 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-sb\") pod \"d3ebe77b-98be-4168-92bb-205f96828b9f\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.659128 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tn4l\" (UniqueName: \"kubernetes.io/projected/d3ebe77b-98be-4168-92bb-205f96828b9f-kube-api-access-9tn4l\") pod \"d3ebe77b-98be-4168-92bb-205f96828b9f\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.659159 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-svc\") pod \"d3ebe77b-98be-4168-92bb-205f96828b9f\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.659204 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-swift-storage-0\") pod \"d3ebe77b-98be-4168-92bb-205f96828b9f\" (UID: \"d3ebe77b-98be-4168-92bb-205f96828b9f\") " Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.665745 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ebe77b-98be-4168-92bb-205f96828b9f-kube-api-access-9tn4l" (OuterVolumeSpecName: "kube-api-access-9tn4l") pod "d3ebe77b-98be-4168-92bb-205f96828b9f" (UID: "d3ebe77b-98be-4168-92bb-205f96828b9f"). InnerVolumeSpecName "kube-api-access-9tn4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.720418 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-config" (OuterVolumeSpecName: "config") pod "d3ebe77b-98be-4168-92bb-205f96828b9f" (UID: "d3ebe77b-98be-4168-92bb-205f96828b9f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.726730 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3ebe77b-98be-4168-92bb-205f96828b9f" (UID: "d3ebe77b-98be-4168-92bb-205f96828b9f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.752836 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3ebe77b-98be-4168-92bb-205f96828b9f" (UID: "d3ebe77b-98be-4168-92bb-205f96828b9f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.757808 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3ebe77b-98be-4168-92bb-205f96828b9f" (UID: "d3ebe77b-98be-4168-92bb-205f96828b9f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.761356 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.761410 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.761420 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.761430 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tn4l\" (UniqueName: \"kubernetes.io/projected/d3ebe77b-98be-4168-92bb-205f96828b9f-kube-api-access-9tn4l\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.761439 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.778049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3ebe77b-98be-4168-92bb-205f96828b9f" (UID: "d3ebe77b-98be-4168-92bb-205f96828b9f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:32 crc kubenswrapper[4812]: I1124 19:36:32.863922 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3ebe77b-98be-4168-92bb-205f96828b9f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:33 crc kubenswrapper[4812]: I1124 19:36:33.127812 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" Nov 24 19:36:33 crc kubenswrapper[4812]: I1124 19:36:33.128347 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b5d4767c-7njn9" event={"ID":"d3ebe77b-98be-4168-92bb-205f96828b9f","Type":"ContainerDied","Data":"159c4e243998923a13d735f0f92854816d64654253892cd9bb0cebffab90d272"} Nov 24 19:36:33 crc kubenswrapper[4812]: I1124 19:36:33.128379 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:33 crc kubenswrapper[4812]: I1124 19:36:33.128409 4812 scope.go:117] "RemoveContainer" containerID="03bf55c63b735a5330e894ed4be4b80d2f378ff394045e55404aef8a26664deb" Nov 24 19:36:33 crc kubenswrapper[4812]: I1124 19:36:33.129489 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:33 crc kubenswrapper[4812]: I1124 19:36:33.167064 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b5d4767c-7njn9"] Nov 24 19:36:33 crc kubenswrapper[4812]: I1124 19:36:33.173213 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b5d4767c-7njn9"] Nov 24 19:36:34 crc kubenswrapper[4812]: I1124 19:36:34.401851 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 19:36:34 crc kubenswrapper[4812]: I1124 19:36:34.402719 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 19:36:34 crc kubenswrapper[4812]: I1124 19:36:34.430691 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 19:36:34 crc kubenswrapper[4812]: I1124 19:36:34.452913 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 19:36:34 crc kubenswrapper[4812]: I1124 19:36:34.978079 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ebe77b-98be-4168-92bb-205f96828b9f" path="/var/lib/kubelet/pods/d3ebe77b-98be-4168-92bb-205f96828b9f/volumes" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.148136 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.148168 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.148351 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.148409 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.243654 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.329860 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.955902 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f58df686c-jn8qq"] Nov 24 19:36:35 crc kubenswrapper[4812]: E1124 19:36:35.956883 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d3ebe77b-98be-4168-92bb-205f96828b9f" containerName="init" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.956900 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ebe77b-98be-4168-92bb-205f96828b9f" containerName="init" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.957162 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ebe77b-98be-4168-92bb-205f96828b9f" containerName="init" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.958852 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.960940 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.969981 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 19:36:35 crc kubenswrapper[4812]: I1124 19:36:35.971609 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f58df686c-jn8qq"] Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.024663 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-internal-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.024717 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-combined-ca-bundle\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.024771 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-public-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.024796 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg2vh\" (UniqueName: \"kubernetes.io/projected/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-kube-api-access-zg2vh\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.024873 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-httpd-config\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.024891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-ovndb-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc 
kubenswrapper[4812]: I1124 19:36:36.024915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-config\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.126712 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-internal-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.127810 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-combined-ca-bundle\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.128646 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-public-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.128990 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zg2vh\" (UniqueName: \"kubernetes.io/projected/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-kube-api-access-zg2vh\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.129445 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-httpd-config\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.129748 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-ovndb-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.129807 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-config\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.138064 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-internal-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.138171 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-config\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.139514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-combined-ca-bundle\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.141182 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-ovndb-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.146120 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-httpd-config\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.147208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-public-tls-certs\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.152024 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg2vh\" (UniqueName: \"kubernetes.io/projected/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-kube-api-access-zg2vh\") pod \"neutron-f58df686c-jn8qq\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.229681 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b4d7f8ffb-jqsz8"] Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.230984 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.234990 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.235428 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.274911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b4d7f8ffb-jqsz8"] Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.297955 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.332947 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1ab98f-9015-432e-90f2-4692dc37c99e-logs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.333051 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-public-tls-certs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.333100 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-combined-ca-bundle\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.333123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data-custom\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.333200 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn922\" (UniqueName: \"kubernetes.io/projected/2e1ab98f-9015-432e-90f2-4692dc37c99e-kube-api-access-fn922\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.333247 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.333272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-internal-tls-certs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439361 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1ab98f-9015-432e-90f2-4692dc37c99e-logs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439443 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-public-tls-certs\") pod 
\"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-combined-ca-bundle\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439505 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data-custom\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439541 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn922\" (UniqueName: \"kubernetes.io/projected/2e1ab98f-9015-432e-90f2-4692dc37c99e-kube-api-access-fn922\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439584 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439611 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-internal-tls-certs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.439979 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1ab98f-9015-432e-90f2-4692dc37c99e-logs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.444880 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data-custom\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.446017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-internal-tls-certs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.446984 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " 
pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.449991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-combined-ca-bundle\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.459832 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-public-tls-certs\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.460570 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn922\" (UniqueName: \"kubernetes.io/projected/2e1ab98f-9015-432e-90f2-4692dc37c99e-kube-api-access-fn922\") pod \"barbican-api-b4d7f8ffb-jqsz8\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:36 crc kubenswrapper[4812]: I1124 19:36:36.556004 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:37 crc kubenswrapper[4812]: I1124 19:36:37.115452 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 19:36:37 crc kubenswrapper[4812]: I1124 19:36:37.116965 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 19:36:38 crc kubenswrapper[4812]: I1124 19:36:38.727416 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-fddsx"] Nov 24 19:36:41 crc kubenswrapper[4812]: I1124 19:36:41.092048 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:41 crc kubenswrapper[4812]: I1124 19:36:41.143572 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:41 crc kubenswrapper[4812]: W1124 19:36:41.156965 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd260a634_3b2f_49ae_9506_1dc164f7423f.slice/crio-fb7849910af48da5a6021cc94d856570e08515933439be782b359b41eaf00cc3 WatchSource:0}: Error finding container fb7849910af48da5a6021cc94d856570e08515933439be782b359b41eaf00cc3: Status 404 returned error can't find the container with id fb7849910af48da5a6021cc94d856570e08515933439be782b359b41eaf00cc3 Nov 24 19:36:41 crc kubenswrapper[4812]: I1124 19:36:41.231699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" event={"ID":"d260a634-3b2f-49ae-9506-1dc164f7423f","Type":"ContainerStarted","Data":"fb7849910af48da5a6021cc94d856570e08515933439be782b359b41eaf00cc3"} Nov 24 19:36:42 crc kubenswrapper[4812]: I1124 19:36:42.246095 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6474fd5f77-zwqph" event={"ID":"5388a6e8-73d7-4097-9521-b47649b4c6c8","Type":"ContainerStarted","Data":"8394c58793ba4d118153a08a1542de897146c92d9f0bb3f90ac84fef380e3cf0"} Nov 24 19:36:42 crc kubenswrapper[4812]: I1124 19:36:42.251666 4812 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" event={"ID":"d260a634-3b2f-49ae-9506-1dc164f7423f","Type":"ContainerStarted","Data":"b4a1f22de43ee24708541e30e1eded4a7fec3aed21f30085ce3168851a95fb50"} Nov 24 19:36:42 crc kubenswrapper[4812]: I1124 19:36:42.256853 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" event={"ID":"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3","Type":"ContainerStarted","Data":"c13e6786a49a879ebdafe7e2706bbeadf9a5620f8ec5d41a07a487a04ba9d615"} Nov 24 19:36:42 crc kubenswrapper[4812]: I1124 19:36:42.330782 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b4d7f8ffb-jqsz8"] Nov 24 19:36:42 crc kubenswrapper[4812]: W1124 19:36:42.356992 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e1ab98f_9015_432e_90f2_4692dc37c99e.slice/crio-a73d77712fdb135d27de8994c1a6c7f28c01894b719bfb07965ead8a121b2062 WatchSource:0}: Error finding container a73d77712fdb135d27de8994c1a6c7f28c01894b719bfb07965ead8a121b2062: Status 404 returned error can't find the container with id a73d77712fdb135d27de8994c1a6c7f28c01894b719bfb07965ead8a121b2062 Nov 24 19:36:42 crc kubenswrapper[4812]: I1124 19:36:42.440515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84884b67b4-mzcbv"] Nov 24 19:36:42 crc kubenswrapper[4812]: I1124 19:36:42.541099 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f58df686c-jn8qq"] Nov 24 19:36:42 crc kubenswrapper[4812]: W1124 19:36:42.554385 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf04fa0a_bd1d_442f_afff_05fe1ebdeafa.slice/crio-c6b53af358657b435e4037070ddfedfd9df942ef3c054809152a79994b960257 WatchSource:0}: Error finding container c6b53af358657b435e4037070ddfedfd9df942ef3c054809152a79994b960257: Status 404 returned error can't find the container with id c6b53af358657b435e4037070ddfedfd9df942ef3c054809152a79994b960257 Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.267637 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerStarted","Data":"ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.268567 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-central-agent" containerID="cri-o://4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7" gracePeriod=30 Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.268640 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.269004 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="proxy-httpd" containerID="cri-o://ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99" gracePeriod=30 Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.269048 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="sg-core" 
containerID="cri-o://42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8" gracePeriod=30 Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.269077 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-notification-agent" containerID="cri-o://1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5" gracePeriod=30 Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.273721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6474fd5f77-zwqph" event={"ID":"5388a6e8-73d7-4097-9521-b47649b4c6c8","Type":"ContainerStarted","Data":"a5b96a8d42c3405967068d87a03841996bd50f3213a845dde3d04bbf1ba0f3ff"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.285906 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f58df686c-jn8qq" event={"ID":"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa","Type":"ContainerStarted","Data":"9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.285940 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f58df686c-jn8qq" event={"ID":"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa","Type":"ContainerStarted","Data":"1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.285953 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.285964 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f58df686c-jn8qq" event={"ID":"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa","Type":"ContainerStarted","Data":"c6b53af358657b435e4037070ddfedfd9df942ef3c054809152a79994b960257"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.288884 4812 generic.go:334] "Generic (PLEG): container finished" podID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerID="b4a1f22de43ee24708541e30e1eded4a7fec3aed21f30085ce3168851a95fb50" exitCode=0 Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.288942 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" event={"ID":"d260a634-3b2f-49ae-9506-1dc164f7423f","Type":"ContainerDied","Data":"b4a1f22de43ee24708541e30e1eded4a7fec3aed21f30085ce3168851a95fb50"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.288962 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" event={"ID":"d260a634-3b2f-49ae-9506-1dc164f7423f","Type":"ContainerStarted","Data":"451d964023b8aa7abae6cf6fa6715769fa25331a56e819b52615f9340c303939"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.289558 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.290587 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.449535354 podStartE2EDuration="42.290572994s" podCreationTimestamp="2025-11-24 19:36:01 +0000 UTC" firstStartedPulling="2025-11-24 19:36:02.126932092 +0000 UTC m=+1155.915884463" lastFinishedPulling="2025-11-24 19:36:41.967969732 +0000 UTC m=+1195.756922103" observedRunningTime="2025-11-24 19:36:43.285999442 +0000 UTC m=+1197.074951813" watchObservedRunningTime="2025-11-24 19:36:43.290572994 +0000 UTC m=+1197.079525365" 
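[Editor's note] The pod_startup_latency_tracker.go:104 entries here encode a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The ceilometer-0 entry above checks out: 42.29s end to end, minus 39.84s spent pulling, leaves the 2.45s SLO figure. A standalone Go sketch (a hypothetical checker, not kubelet code) reproducing the arithmetic from the logged values:

```go
// slo_check.go — recompute the kubelet's pod startup figures from the
// timestamps logged above. Illustrative only; variable names mirror the
// log keys, not kubelet's internal types.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the log's timestamp format, e.g. "2025-11-24 19:36:01 +0000 UTC".
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values from the openstack/ceilometer-0 entry at 19:36:43.
	podCreationTimestamp := parse("2025-11-24 19:36:01 +0000 UTC")
	firstStartedPulling := parse("2025-11-24 19:36:02.126932092 +0000 UTC")
	lastFinishedPulling := parse("2025-11-24 19:36:41.967969732 +0000 UTC")
	observedRunningTime := parse("2025-11-24 19:36:43.290572994 +0000 UTC")

	e2e := observedRunningTime.Sub(podCreationTimestamp)  // podStartE2EDuration
	pull := lastFinishedPulling.Sub(firstStartedPulling)  // image-pull window
	slo := e2e - pull                                     // podStartSLOduration

	fmt.Println("E2E: ", e2e)  // 42.290572994s
	fmt.Println("pull:", pull) // 39.84103764s
	fmt.Println("SLO: ", slo)  // 2.449535354s, as logged
}
```

Consistent with this, pods whose images never had to be pulled carry the zero-value timestamps (firstStartedPulling="0001-01-01 00:00:00 +0000 UTC") and report podStartSLOduration equal to podStartE2EDuration, as in the neutron-f58df686c-jn8qq entry below (8.311799773s for both).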
Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.291464 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nl6jz" event={"ID":"5a5a2d87-4746-4254-9f63-10fe73a4001f","Type":"ContainerStarted","Data":"e021b7fe56e0097d5aa8fd3433742c3816130416aba858d21e90d0ed7ba6119b"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.293926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84884b67b4-mzcbv" event={"ID":"b4073e4e-f8f0-4ef9-a813-87920edaa840","Type":"ContainerStarted","Data":"e7b27c07f1634e844989fa1c25a4ba21f6b40df5db57e3e377b5bf27b2d39a29"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.293955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84884b67b4-mzcbv" event={"ID":"b4073e4e-f8f0-4ef9-a813-87920edaa840","Type":"ContainerStarted","Data":"79b5f87c486798f95647660266b1dbff5d59afbb11903583a975e0ccee7cb993"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.293966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84884b67b4-mzcbv" event={"ID":"b4073e4e-f8f0-4ef9-a813-87920edaa840","Type":"ContainerStarted","Data":"cf0979564bea0d93eb2ff28df3a86dac19318b839f6acaa80f7ea1a1da2c1681"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.294518 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.300798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" event={"ID":"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3","Type":"ContainerStarted","Data":"44d0bec5b4e03ee1c205aff04d3d557f0b6f05d3d6adee7a3fd558ceba3cbd6b"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.302926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" event={"ID":"2e1ab98f-9015-432e-90f2-4692dc37c99e","Type":"ContainerStarted","Data":"0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.302954 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" event={"ID":"2e1ab98f-9015-432e-90f2-4692dc37c99e","Type":"ContainerStarted","Data":"4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.302963 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" event={"ID":"2e1ab98f-9015-432e-90f2-4692dc37c99e","Type":"ContainerStarted","Data":"a73d77712fdb135d27de8994c1a6c7f28c01894b719bfb07965ead8a121b2062"} Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.303614 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.303640 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.311823 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f58df686c-jn8qq" podStartSLOduration=8.311799773 podStartE2EDuration="8.311799773s" podCreationTimestamp="2025-11-24 19:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:43.306207393 +0000 UTC m=+1197.095159764" 
watchObservedRunningTime="2025-11-24 19:36:43.311799773 +0000 UTC m=+1197.100752144" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.334811 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6474fd5f77-zwqph" podStartSLOduration=2.638813564 podStartE2EDuration="14.334795594s" podCreationTimestamp="2025-11-24 19:36:29 +0000 UTC" firstStartedPulling="2025-11-24 19:36:30.142953566 +0000 UTC m=+1183.931905937" lastFinishedPulling="2025-11-24 19:36:41.838935596 +0000 UTC m=+1195.627887967" observedRunningTime="2025-11-24 19:36:43.331273493 +0000 UTC m=+1197.120225864" watchObservedRunningTime="2025-11-24 19:36:43.334795594 +0000 UTC m=+1197.123747965" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.368153 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84884b67b4-mzcbv" podStartSLOduration=12.368133162 podStartE2EDuration="12.368133162s" podCreationTimestamp="2025-11-24 19:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:43.347455758 +0000 UTC m=+1197.136408129" watchObservedRunningTime="2025-11-24 19:36:43.368133162 +0000 UTC m=+1197.157085553" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.421375 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" podStartSLOduration=2.887395076 podStartE2EDuration="14.421358421s" podCreationTimestamp="2025-11-24 19:36:29 +0000 UTC" firstStartedPulling="2025-11-24 19:36:30.240131638 +0000 UTC m=+1184.029084009" lastFinishedPulling="2025-11-24 19:36:41.774094943 +0000 UTC m=+1195.563047354" observedRunningTime="2025-11-24 19:36:43.366904616 +0000 UTC m=+1197.155856987" watchObservedRunningTime="2025-11-24 19:36:43.421358421 +0000 UTC m=+1197.210310792" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.430838 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" podStartSLOduration=7.430822592 podStartE2EDuration="7.430822592s" podCreationTimestamp="2025-11-24 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:43.420322371 +0000 UTC m=+1197.209274732" watchObservedRunningTime="2025-11-24 19:36:43.430822592 +0000 UTC m=+1197.219774963" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.449893 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nl6jz" podStartSLOduration=3.456293501 podStartE2EDuration="43.44987456s" podCreationTimestamp="2025-11-24 19:36:00 +0000 UTC" firstStartedPulling="2025-11-24 19:36:01.974400844 +0000 UTC m=+1155.763353215" lastFinishedPulling="2025-11-24 19:36:41.967981903 +0000 UTC m=+1195.756934274" observedRunningTime="2025-11-24 19:36:43.437801773 +0000 UTC m=+1197.226754144" watchObservedRunningTime="2025-11-24 19:36:43.44987456 +0000 UTC m=+1197.238826931" Nov 24 19:36:43 crc kubenswrapper[4812]: I1124 19:36:43.468457 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" podStartSLOduration=12.468438383 podStartE2EDuration="12.468438383s" podCreationTimestamp="2025-11-24 19:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
19:36:43.456786268 +0000 UTC m=+1197.245738639" watchObservedRunningTime="2025-11-24 19:36:43.468438383 +0000 UTC m=+1197.257390754" Nov 24 19:36:44 crc kubenswrapper[4812]: I1124 19:36:44.315024 4812 generic.go:334] "Generic (PLEG): container finished" podID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerID="ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99" exitCode=0 Nov 24 19:36:44 crc kubenswrapper[4812]: I1124 19:36:44.315299 4812 generic.go:334] "Generic (PLEG): container finished" podID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerID="42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8" exitCode=2 Nov 24 19:36:44 crc kubenswrapper[4812]: I1124 19:36:44.315311 4812 generic.go:334] "Generic (PLEG): container finished" podID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerID="4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7" exitCode=0 Nov 24 19:36:44 crc kubenswrapper[4812]: I1124 19:36:44.315092 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerDied","Data":"ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99"} Nov 24 19:36:44 crc kubenswrapper[4812]: I1124 19:36:44.315589 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerDied","Data":"42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8"} Nov 24 19:36:44 crc kubenswrapper[4812]: I1124 19:36:44.315642 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerDied","Data":"4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7"}
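[Editor's note] Read these exitCode values together with the "Killing container with a grace period ... gracePeriod=30" requests logged at 19:36:43: the runtime sends SIGTERM, waits up to the grace period, then escalates to SIGKILL. proxy-httpd (ce6c78e9...) and ceilometer-central-agent (4a8d5ec6...) exited 0, i.e. shut down cleanly on SIGTERM, while sg-core (42fc49c7...) exited with status 2 of its own accord rather than being force-killed. A minimal Go sketch of that stop pattern (illustrative only, not the CRI-O implementation):

```go
// stop_with_grace.go — the SIGTERM-then-SIGKILL pattern behind
// "Killing container with a grace period ... gracePeriod=30".
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace asks cmd's process to terminate and escalates to
// SIGKILL if it is still running once gracePeriod has elapsed.
func stopWithGrace(cmd *exec.Cmd, gracePeriod time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period; exit status is in err
	case <-time.After(gracePeriod):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// sleep dies promptly on SIGTERM, so this returns well before 30s.
	err := stopWithGrace(cmd, 30*time.Second)
	fmt.Println("stopped:", err) // e.g. "signal: terminated"
}
```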
Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.112946 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273217 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-sg-core-conf-yaml\") pod \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273308 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-config-data\") pod \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273410 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-run-httpd\") pod \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273456 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtwkz\" (UniqueName: \"kubernetes.io/projected/169bdfbe-91a0-476c-a315-e9ea82f10ca5-kube-api-access-wtwkz\") pod \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273524 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-combined-ca-bundle\") pod \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273608 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-scripts\") pod \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-log-httpd\") pod \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\" (UID: \"169bdfbe-91a0-476c-a315-e9ea82f10ca5\") " Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.273859 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "169bdfbe-91a0-476c-a315-e9ea82f10ca5" (UID: "169bdfbe-91a0-476c-a315-e9ea82f10ca5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.274169 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "169bdfbe-91a0-476c-a315-e9ea82f10ca5" (UID: "169bdfbe-91a0-476c-a315-e9ea82f10ca5"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.274254 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.281545 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-scripts" (OuterVolumeSpecName: "scripts") pod "169bdfbe-91a0-476c-a315-e9ea82f10ca5" (UID: "169bdfbe-91a0-476c-a315-e9ea82f10ca5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.281588 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169bdfbe-91a0-476c-a315-e9ea82f10ca5-kube-api-access-wtwkz" (OuterVolumeSpecName: "kube-api-access-wtwkz") pod "169bdfbe-91a0-476c-a315-e9ea82f10ca5" (UID: "169bdfbe-91a0-476c-a315-e9ea82f10ca5"). InnerVolumeSpecName "kube-api-access-wtwkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.318962 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "169bdfbe-91a0-476c-a315-e9ea82f10ca5" (UID: "169bdfbe-91a0-476c-a315-e9ea82f10ca5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.354497 4812 generic.go:334] "Generic (PLEG): container finished" podID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerID="1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5" exitCode=0 Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.354546 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerDied","Data":"1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5"} Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.354575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"169bdfbe-91a0-476c-a315-e9ea82f10ca5","Type":"ContainerDied","Data":"84dd7da0e7355a6c3fb1d9d1c6cbafb4f561ff71a4654c0c2abc04d1dff1547a"} Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.354592 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.354595 4812 scope.go:117] "RemoveContainer" containerID="ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.376135 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.376171 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/169bdfbe-91a0-476c-a315-e9ea82f10ca5-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.376184 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.376198 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtwkz\" (UniqueName: \"kubernetes.io/projected/169bdfbe-91a0-476c-a315-e9ea82f10ca5-kube-api-access-wtwkz\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.383047 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "169bdfbe-91a0-476c-a315-e9ea82f10ca5" (UID: "169bdfbe-91a0-476c-a315-e9ea82f10ca5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.410178 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-config-data" (OuterVolumeSpecName: "config-data") pod "169bdfbe-91a0-476c-a315-e9ea82f10ca5" (UID: "169bdfbe-91a0-476c-a315-e9ea82f10ca5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.420310 4812 scope.go:117] "RemoveContainer" containerID="42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.449211 4812 scope.go:117] "RemoveContainer" containerID="1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.470364 4812 scope.go:117] "RemoveContainer" containerID="4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.478215 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.478244 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/169bdfbe-91a0-476c-a315-e9ea82f10ca5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.489258 4812 scope.go:117] "RemoveContainer" containerID="ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99" Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.489650 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99\": container with ID starting with ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99 not found: ID does not exist" containerID="ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.489698 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99"} err="failed to get container status \"ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99\": rpc error: code = NotFound desc = could not find container \"ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99\": container with ID starting with ce6c78e988cdf0a3ef8328b3f85e04a5852ee4ec1e15226815c49095f468ad99 not found: ID does not exist" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.489729 4812 scope.go:117] "RemoveContainer" containerID="42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8" Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.490170 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8\": container with ID starting with 42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8 not found: ID does not exist" containerID="42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.490207 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8"} err="failed to get container status \"42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8\": rpc error: code = NotFound desc = could not find container \"42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8\": container with ID starting with 
42fc49c75b8cd7333bb583c37b53c2a2a3f1da4ba6d342977c3330998c3ed4d8 not found: ID does not exist" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.490228 4812 scope.go:117] "RemoveContainer" containerID="1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5" Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.490433 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5\": container with ID starting with 1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5 not found: ID does not exist" containerID="1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.490450 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5"} err="failed to get container status \"1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5\": rpc error: code = NotFound desc = could not find container \"1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5\": container with ID starting with 1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5 not found: ID does not exist" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.490463 4812 scope.go:117] "RemoveContainer" containerID="4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7" Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.490646 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7\": container with ID starting with 4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7 not found: ID does not exist" containerID="4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.490665 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7"} err="failed to get container status \"4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7\": rpc error: code = NotFound desc = could not find container \"4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7\": container with ID starting with 4a8d5ec6f0abd52f143fd632d0c822d757220d37a6bc65af0f49cd7ba26f3bb7 not found: ID does not exist"
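[Editor's note] These E1124 "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" pairs appear benign: by the time the kubelet retries removal, the runtime has already deleted the containers, so NotFound effectively means "nothing left to do" and the sync loop carries on (the DELETE/REMOVE/ADD sequence below proceeds normally). A stdlib-only Go sketch of that idempotent-delete pattern (the runtime client here is a stand-in, not the CRI API):

```go
// remove_idempotent.go — why the NotFound errors above are harmless:
// a delete that races with other cleanup treats "already gone" as done.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

// fake runtime: the container was already removed by an earlier pass.
func deleteContainer(id string) error { return errNotFound }

// removeContainer is idempotent: NotFound counts as success.
func removeContainer(id string) error {
	if err := deleteContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %s already gone, nothing to do\n", id)
			return nil
		}
		return err // any other failure is real and propagates
	}
	return nil
}

func main() {
	// One of the IDs from the log above.
	_ = removeContainer("1562bd89b357fa537917bd8bf2fc7254e76fcce3becc7965c45d1585426c27f5")
}
```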
Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.699606 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.712675 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.727995 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.728432 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-notification-agent" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728465 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-notification-agent" Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.728487 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-central-agent" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728495 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-central-agent" Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.728506 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="sg-core" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728512 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="sg-core" Nov 24 19:36:47 crc kubenswrapper[4812]: E1124 19:36:47.728536 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="proxy-httpd" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728542 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="proxy-httpd" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728714 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="sg-core" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728727 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-notification-agent" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728745 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="proxy-httpd" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.728758 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" containerName="ceilometer-central-agent" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.730722 4812 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.733987 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.734112 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.749725 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.883647 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.883703 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvc7v\" (UniqueName: \"kubernetes.io/projected/04be8da8-17d2-450e-8d75-f30600c4656a-kube-api-access-mvc7v\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.883735 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-log-httpd\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.883905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-run-httpd\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.884036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-scripts\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.884083 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-config-data\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.884108 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.985461 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-scripts\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.985546 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-config-data\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.985582 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.985648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.985691 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvc7v\" (UniqueName: \"kubernetes.io/projected/04be8da8-17d2-450e-8d75-f30600c4656a-kube-api-access-mvc7v\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.985733 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-log-httpd\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.985787 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-run-httpd\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.986478 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-log-httpd\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.986513 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-run-httpd\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.992236 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.992414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-config-data\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.992663 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-scripts\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:47 crc kubenswrapper[4812]: I1124 19:36:47.993288 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:48 crc kubenswrapper[4812]: I1124 19:36:48.013779 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvc7v\" (UniqueName: \"kubernetes.io/projected/04be8da8-17d2-450e-8d75-f30600c4656a-kube-api-access-mvc7v\") pod \"ceilometer-0\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " pod="openstack/ceilometer-0" Nov 24 19:36:48 crc kubenswrapper[4812]: I1124 19:36:48.053644 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:36:48 crc kubenswrapper[4812]: W1124 19:36:48.534154 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04be8da8_17d2_450e_8d75_f30600c4656a.slice/crio-f56c6b0bd8da8b7ab4281a4580efcdb049022fc7f06bcf927a04b4f964ceffb5 WatchSource:0}: Error finding container f56c6b0bd8da8b7ab4281a4580efcdb049022fc7f06bcf927a04b4f964ceffb5: Status 404 returned error can't find the container with id f56c6b0bd8da8b7ab4281a4580efcdb049022fc7f06bcf927a04b4f964ceffb5 Nov 24 19:36:48 crc kubenswrapper[4812]: I1124 19:36:48.537003 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:36:48 crc kubenswrapper[4812]: I1124 19:36:48.979454 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169bdfbe-91a0-476c-a315-e9ea82f10ca5" path="/var/lib/kubelet/pods/169bdfbe-91a0-476c-a315-e9ea82f10ca5/volumes"
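[Editor's note] Note the UID handoff: the replacement ceilometer-0 runs under UID 04be8da8-17d2-450e-8d75-f30600c4656a while the "Cleaned up orphaned pod volumes dir" entry reaps /var/lib/kubelet/pods/169bdfbe-91a0-476c-a315-e9ea82f10ca5/volumes from the deleted instance. Because on-disk pod state is keyed by UID rather than by pod name, the two lifecycles never collide. A small Go sketch of that layout (the helper names are made up; the path scheme is the one shown in the log):

```go
// pod_dirs.go — kubelet keys per-pod state by UID, not by name, which is
// why ceilometer-0 can be torn down and recreated while the old tree is
// reaped independently.
package main

import (
	"fmt"
	"path/filepath"
)

const kubeletRoot = "/var/lib/kubelet"

// podDir returns the per-UID state directory for a pod.
func podDir(uid string) string {
	return filepath.Join(kubeletRoot, "pods", uid)
}

// podVolumesDir returns the volumes subtree the log entries refer to.
func podVolumesDir(uid string) string {
	return filepath.Join(podDir(uid), "volumes")
}

func main() {
	oldUID := "169bdfbe-91a0-476c-a315-e9ea82f10ca5" // deleted ceilometer-0
	newUID := "04be8da8-17d2-450e-8d75-f30600c4656a" // replacement ceilometer-0

	// The "Cleaned up orphaned pod volumes dir" entry refers to the old tree:
	fmt.Println("orphaned:", podVolumesDir(oldUID))
	// ...while the new pod's mounts land under its own UID:
	fmt.Println("active:  ", podVolumesDir(newUID))
}
```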
Nov 24 19:36:49 crc kubenswrapper[4812]: I1124 19:36:49.388654 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerStarted","Data":"0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055"} Nov 24 19:36:49 crc kubenswrapper[4812]: I1124 19:36:49.388755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerStarted","Data":"f56c6b0bd8da8b7ab4281a4580efcdb049022fc7f06bcf927a04b4f964ceffb5"} Nov 24 19:36:50 crc kubenswrapper[4812]: I1124 19:36:50.398317 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerStarted","Data":"bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1"} Nov 24 19:36:50 crc kubenswrapper[4812]: I1124 19:36:50.401885 4812 generic.go:334] "Generic (PLEG): container finished" podID="5a5a2d87-4746-4254-9f63-10fe73a4001f" containerID="e021b7fe56e0097d5aa8fd3433742c3816130416aba858d21e90d0ed7ba6119b" exitCode=0 Nov 24 19:36:50 crc kubenswrapper[4812]: I1124 19:36:50.401937 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nl6jz" event={"ID":"5a5a2d87-4746-4254-9f63-10fe73a4001f","Type":"ContainerDied","Data":"e021b7fe56e0097d5aa8fd3433742c3816130416aba858d21e90d0ed7ba6119b"} Nov 24 19:36:51 crc kubenswrapper[4812]: I1124 19:36:51.412977 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerStarted","Data":"da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286"} Nov 24 19:36:51 crc kubenswrapper[4812]: I1124 19:36:51.557528 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" Nov 24 19:36:51 crc kubenswrapper[4812]: I1124 19:36:51.610536 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-gfqnn"] Nov 24 19:36:51 crc kubenswrapper[4812]: I1124 19:36:51.610785 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" podUID="9788b094-1c08-4c10-be9f-a22f3087c814" containerName="dnsmasq-dns" containerID="cri-o://82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f" gracePeriod=10 Nov 24 19:36:51 crc kubenswrapper[4812]: I1124 19:36:51.929930 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.087503 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-db-sync-config-data\") pod \"5a5a2d87-4746-4254-9f63-10fe73a4001f\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.087707 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-scripts\") pod \"5a5a2d87-4746-4254-9f63-10fe73a4001f\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.087785 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6cc9\" (UniqueName: \"kubernetes.io/projected/5a5a2d87-4746-4254-9f63-10fe73a4001f-kube-api-access-s6cc9\") pod \"5a5a2d87-4746-4254-9f63-10fe73a4001f\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.087885 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-combined-ca-bundle\") pod \"5a5a2d87-4746-4254-9f63-10fe73a4001f\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.087906 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a5a2d87-4746-4254-9f63-10fe73a4001f-etc-machine-id\") pod \"5a5a2d87-4746-4254-9f63-10fe73a4001f\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.087940 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-config-data\") pod \"5a5a2d87-4746-4254-9f63-10fe73a4001f\" (UID: \"5a5a2d87-4746-4254-9f63-10fe73a4001f\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.092051 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a5a2d87-4746-4254-9f63-10fe73a4001f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod
"5a5a2d87-4746-4254-9f63-10fe73a4001f" (UID: "5a5a2d87-4746-4254-9f63-10fe73a4001f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.092192 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-scripts" (OuterVolumeSpecName: "scripts") pod "5a5a2d87-4746-4254-9f63-10fe73a4001f" (UID: "5a5a2d87-4746-4254-9f63-10fe73a4001f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.096894 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5a2d87-4746-4254-9f63-10fe73a4001f-kube-api-access-s6cc9" (OuterVolumeSpecName: "kube-api-access-s6cc9") pod "5a5a2d87-4746-4254-9f63-10fe73a4001f" (UID: "5a5a2d87-4746-4254-9f63-10fe73a4001f"). InnerVolumeSpecName "kube-api-access-s6cc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.122502 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5a5a2d87-4746-4254-9f63-10fe73a4001f" (UID: "5a5a2d87-4746-4254-9f63-10fe73a4001f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.149508 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a5a2d87-4746-4254-9f63-10fe73a4001f" (UID: "5a5a2d87-4746-4254-9f63-10fe73a4001f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.173459 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-config-data" (OuterVolumeSpecName: "config-data") pod "5a5a2d87-4746-4254-9f63-10fe73a4001f" (UID: "5a5a2d87-4746-4254-9f63-10fe73a4001f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.177439 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.194452 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.194494 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6cc9\" (UniqueName: \"kubernetes.io/projected/5a5a2d87-4746-4254-9f63-10fe73a4001f-kube-api-access-s6cc9\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.194508 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.194520 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a5a2d87-4746-4254-9f63-10fe73a4001f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.194531 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.194542 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a5a2d87-4746-4254-9f63-10fe73a4001f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.297235 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jxwq\" (UniqueName: \"kubernetes.io/projected/9788b094-1c08-4c10-be9f-a22f3087c814-kube-api-access-6jxwq\") pod \"9788b094-1c08-4c10-be9f-a22f3087c814\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.297327 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-swift-storage-0\") pod \"9788b094-1c08-4c10-be9f-a22f3087c814\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.297408 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-config\") pod \"9788b094-1c08-4c10-be9f-a22f3087c814\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.297567 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-nb\") pod \"9788b094-1c08-4c10-be9f-a22f3087c814\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.297712 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-sb\") pod \"9788b094-1c08-4c10-be9f-a22f3087c814\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.297783 4812 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-svc\") pod \"9788b094-1c08-4c10-be9f-a22f3087c814\" (UID: \"9788b094-1c08-4c10-be9f-a22f3087c814\") " Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.303189 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9788b094-1c08-4c10-be9f-a22f3087c814-kube-api-access-6jxwq" (OuterVolumeSpecName: "kube-api-access-6jxwq") pod "9788b094-1c08-4c10-be9f-a22f3087c814" (UID: "9788b094-1c08-4c10-be9f-a22f3087c814"). InnerVolumeSpecName "kube-api-access-6jxwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.346419 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9788b094-1c08-4c10-be9f-a22f3087c814" (UID: "9788b094-1c08-4c10-be9f-a22f3087c814"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.360858 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-config" (OuterVolumeSpecName: "config") pod "9788b094-1c08-4c10-be9f-a22f3087c814" (UID: "9788b094-1c08-4c10-be9f-a22f3087c814"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.361533 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9788b094-1c08-4c10-be9f-a22f3087c814" (UID: "9788b094-1c08-4c10-be9f-a22f3087c814"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.372207 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9788b094-1c08-4c10-be9f-a22f3087c814" (UID: "9788b094-1c08-4c10-be9f-a22f3087c814"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.377261 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9788b094-1c08-4c10-be9f-a22f3087c814" (UID: "9788b094-1c08-4c10-be9f-a22f3087c814"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.400459 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jxwq\" (UniqueName: \"kubernetes.io/projected/9788b094-1c08-4c10-be9f-a22f3087c814-kube-api-access-6jxwq\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.400497 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.400507 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.400516 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.400523 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.400531 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9788b094-1c08-4c10-be9f-a22f3087c814-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.424046 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nl6jz" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.425391 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nl6jz" event={"ID":"5a5a2d87-4746-4254-9f63-10fe73a4001f","Type":"ContainerDied","Data":"91d1d1002d53d0bbc84cd1e8faecccc6beb4f0059a4cefa6dffa9f2f75890729"} Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.425481 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d1d1002d53d0bbc84cd1e8faecccc6beb4f0059a4cefa6dffa9f2f75890729" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.427582 4812 generic.go:334] "Generic (PLEG): container finished" podID="9788b094-1c08-4c10-be9f-a22f3087c814" containerID="82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f" exitCode=0 Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.427641 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" event={"ID":"9788b094-1c08-4c10-be9f-a22f3087c814","Type":"ContainerDied","Data":"82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f"} Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.427662 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" event={"ID":"9788b094-1c08-4c10-be9f-a22f3087c814","Type":"ContainerDied","Data":"bd7e31fbce8fe57d6c064aaf6d13adb838b7f4bc793229d83d8d4769e8cfb7fe"} Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.427680 4812 scope.go:117] "RemoveContainer" containerID="82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.427822 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76c58b6d97-gfqnn" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.434067 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerStarted","Data":"61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78"} Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.434948 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.463589 4812 scope.go:117] "RemoveContainer" containerID="ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.474134 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.956235441 podStartE2EDuration="5.474111563s" podCreationTimestamp="2025-11-24 19:36:47 +0000 UTC" firstStartedPulling="2025-11-24 19:36:48.537885224 +0000 UTC m=+1202.326837605" lastFinishedPulling="2025-11-24 19:36:52.055761366 +0000 UTC m=+1205.844713727" observedRunningTime="2025-11-24 19:36:52.464181368 +0000 UTC m=+1206.253133749" watchObservedRunningTime="2025-11-24 19:36:52.474111563 +0000 UTC m=+1206.263063934" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.505090 4812 scope.go:117] "RemoveContainer" containerID="82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f" Nov 24 19:36:52 crc kubenswrapper[4812]: E1124 19:36:52.505590 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f\": container with ID starting with 82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f not found: ID does not exist" containerID="82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.505630 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f"} err="failed to get container status \"82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f\": rpc error: code = NotFound desc = could not find container \"82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f\": container with ID starting with 82a663ac9b08a960ac4fa8a9f60ad5df010a1bfdcd690d82d8e1c5195f06d79f not found: ID does not exist" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.505665 4812 scope.go:117] "RemoveContainer" containerID="ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151" Nov 24 19:36:52 crc kubenswrapper[4812]: E1124 19:36:52.506033 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151\": container with ID starting with ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151 not found: ID does not exist" containerID="ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.506073 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151"} err="failed to get container status \"ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151\": rpc error: code = 
NotFound desc = could not find container \"ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151\": container with ID starting with ff8070c545474c3f5bf799fc34b73e0f121d4a68b5d19b4273fb40c3a21c9151 not found: ID does not exist" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.507051 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-gfqnn"] Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.513398 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76c58b6d97-gfqnn"] Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.706046 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:36:52 crc kubenswrapper[4812]: E1124 19:36:52.708431 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9788b094-1c08-4c10-be9f-a22f3087c814" containerName="dnsmasq-dns" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.708451 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9788b094-1c08-4c10-be9f-a22f3087c814" containerName="dnsmasq-dns" Nov 24 19:36:52 crc kubenswrapper[4812]: E1124 19:36:52.708475 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9788b094-1c08-4c10-be9f-a22f3087c814" containerName="init" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.708482 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9788b094-1c08-4c10-be9f-a22f3087c814" containerName="init" Nov 24 19:36:52 crc kubenswrapper[4812]: E1124 19:36:52.708517 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5a2d87-4746-4254-9f63-10fe73a4001f" containerName="cinder-db-sync" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.708523 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5a2d87-4746-4254-9f63-10fe73a4001f" containerName="cinder-db-sync" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.713944 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9788b094-1c08-4c10-be9f-a22f3087c814" containerName="dnsmasq-dns" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.713987 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5a2d87-4746-4254-9f63-10fe73a4001f" containerName="cinder-db-sync" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.715161 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.718896 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.718982 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.718910 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.721455 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hnc55" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.725509 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.806208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.806279 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.806323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckj8t\" (UniqueName: \"kubernetes.io/projected/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-kube-api-access-ckj8t\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.806390 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.806464 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.806479 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.826133 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-797bbc649-65cf5"] Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.827756 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.852331 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-65cf5"] Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908227 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908287 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908363 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908509 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfcm6\" (UniqueName: \"kubernetes.io/projected/43666515-aeb8-458b-a6a4-380d22e689ba-kube-api-access-cfcm6\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908534 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-config\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908629 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908684 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-svc\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908751 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckj8t\" (UniqueName: \"kubernetes.io/projected/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-kube-api-access-ckj8t\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908767 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.908441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.920478 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.922882 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.923823 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.938716 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckj8t\" (UniqueName: \"kubernetes.io/projected/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-kube-api-access-ckj8t\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.947204 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-scripts\") pod \"cinder-scheduler-0\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " pod="openstack/cinder-scheduler-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 
19:36:52.964807 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.966225 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.974703 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.984840 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9788b094-1c08-4c10-be9f-a22f3087c814" path="/var/lib/kubelet/pods/9788b094-1c08-4c10-be9f-a22f3087c814/volumes" Nov 24 19:36:52 crc kubenswrapper[4812]: I1124 19:36:52.988940 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.014218 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.014278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-svc\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.014397 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.014449 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfcm6\" (UniqueName: \"kubernetes.io/projected/43666515-aeb8-458b-a6a4-380d22e689ba-kube-api-access-cfcm6\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.014478 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-config\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.014520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.015517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-swift-storage-0\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 
crc kubenswrapper[4812]: I1124 19:36:53.016473 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-sb\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.018320 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-nb\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.018837 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-config\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.030757 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-svc\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.065002 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfcm6\" (UniqueName: \"kubernetes.io/projected/43666515-aeb8-458b-a6a4-380d22e689ba-kube-api-access-cfcm6\") pod \"dnsmasq-dns-797bbc649-65cf5\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.107789 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.116207 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-logs\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.117014 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data-custom\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.117126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-scripts\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.117228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.117584 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsn7s\" (UniqueName: \"kubernetes.io/projected/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-kube-api-access-zsn7s\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.117734 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.117933 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.165594 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225359 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225619 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-logs\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225694 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data-custom\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225726 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-scripts\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225758 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225801 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsn7s\" (UniqueName: \"kubernetes.io/projected/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-kube-api-access-zsn7s\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.225936 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.230190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-logs\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.254886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data-custom\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " 
pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.258688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsn7s\" (UniqueName: \"kubernetes.io/projected/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-kube-api-access-zsn7s\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.261950 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-scripts\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.272340 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.272902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.407692 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.582378 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.711099 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-65cf5"] Nov 24 19:36:53 crc kubenswrapper[4812]: W1124 19:36:53.713857 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43666515_aeb8_458b_a6a4_380d22e689ba.slice/crio-0c88b8954410912b442cb480b40681f4bf15d8291f560677e86db1b4b586982f WatchSource:0}: Error finding container 0c88b8954410912b442cb480b40681f4bf15d8291f560677e86db1b4b586982f: Status 404 returned error can't find the container with id 0c88b8954410912b442cb480b40681f4bf15d8291f560677e86db1b4b586982f Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.809264 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.875411 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:53 crc kubenswrapper[4812]: W1124 19:36:53.875480 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64fe61b4_a0ff_4112_a03b_3efbf41fc49d.slice/crio-9c5d526ad2a989d4a5b2db94f738d588885f984ccaa77a82bd456b9a730a9826 WatchSource:0}: Error finding container 9c5d526ad2a989d4a5b2db94f738d588885f984ccaa77a82bd456b9a730a9826: Status 404 returned error can't find the container with id 9c5d526ad2a989d4a5b2db94f738d588885f984ccaa77a82bd456b9a730a9826 Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.895368 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 
19:36:53.959464 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-677dbcd544-vn9hr"] Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.966976 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-677dbcd544-vn9hr" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api-log" containerID="cri-o://2c08fc1656dc6af46fe487c79f4e8e13a88ca0c62c23dac3c5e5110812aabe0e" gracePeriod=30 Nov 24 19:36:53 crc kubenswrapper[4812]: I1124 19:36:53.967007 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-677dbcd544-vn9hr" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api" containerID="cri-o://efe8190e176d4f61444a62711cb5acaffd4f1fc0e1300e501621076d93bceaf0" gracePeriod=30 Nov 24 19:36:54 crc kubenswrapper[4812]: I1124 19:36:54.475279 4812 generic.go:334] "Generic (PLEG): container finished" podID="43666515-aeb8-458b-a6a4-380d22e689ba" containerID="3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146" exitCode=0 Nov 24 19:36:54 crc kubenswrapper[4812]: I1124 19:36:54.475827 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-65cf5" event={"ID":"43666515-aeb8-458b-a6a4-380d22e689ba","Type":"ContainerDied","Data":"3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146"} Nov 24 19:36:54 crc kubenswrapper[4812]: I1124 19:36:54.475872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-65cf5" event={"ID":"43666515-aeb8-458b-a6a4-380d22e689ba","Type":"ContainerStarted","Data":"0c88b8954410912b442cb480b40681f4bf15d8291f560677e86db1b4b586982f"} Nov 24 19:36:54 crc kubenswrapper[4812]: I1124 19:36:54.489521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601","Type":"ContainerStarted","Data":"83e2d85f12d1f7209edc72100c24df2c6bcd8a742586324a84e577be3808aefd"} Nov 24 19:36:54 crc kubenswrapper[4812]: I1124 19:36:54.499930 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b2589de-d523-4229-adfa-d0b682557955" containerID="2c08fc1656dc6af46fe487c79f4e8e13a88ca0c62c23dac3c5e5110812aabe0e" exitCode=143 Nov 24 19:36:54 crc kubenswrapper[4812]: I1124 19:36:54.500041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677dbcd544-vn9hr" event={"ID":"1b2589de-d523-4229-adfa-d0b682557955","Type":"ContainerDied","Data":"2c08fc1656dc6af46fe487c79f4e8e13a88ca0c62c23dac3c5e5110812aabe0e"} Nov 24 19:36:54 crc kubenswrapper[4812]: I1124 19:36:54.504360 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64fe61b4-a0ff-4112-a03b-3efbf41fc49d","Type":"ContainerStarted","Data":"9c5d526ad2a989d4a5b2db94f738d588885f984ccaa77a82bd456b9a730a9826"} Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.463720 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.514492 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64fe61b4-a0ff-4112-a03b-3efbf41fc49d","Type":"ContainerStarted","Data":"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1"} Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.514544 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"64fe61b4-a0ff-4112-a03b-3efbf41fc49d","Type":"ContainerStarted","Data":"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3"} Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.514931 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.527720 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-65cf5" event={"ID":"43666515-aeb8-458b-a6a4-380d22e689ba","Type":"ContainerStarted","Data":"cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a"} Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.527829 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.532136 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.532118185 podStartE2EDuration="3.532118185s" podCreationTimestamp="2025-11-24 19:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:55.530421456 +0000 UTC m=+1209.319373847" watchObservedRunningTime="2025-11-24 19:36:55.532118185 +0000 UTC m=+1209.321070566" Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.538026 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601","Type":"ContainerStarted","Data":"089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8"} Nov 24 19:36:55 crc kubenswrapper[4812]: I1124 19:36:55.572310 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-797bbc649-65cf5" podStartSLOduration=3.572292769 podStartE2EDuration="3.572292769s" podCreationTimestamp="2025-11-24 19:36:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:36:55.560463509 +0000 UTC m=+1209.349415890" watchObservedRunningTime="2025-11-24 19:36:55.572292769 +0000 UTC m=+1209.361245140" Nov 24 19:36:56 crc kubenswrapper[4812]: I1124 19:36:56.553193 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api-log" containerID="cri-o://73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3" gracePeriod=30 Nov 24 19:36:56 crc kubenswrapper[4812]: I1124 19:36:56.553299 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api" containerID="cri-o://4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1" gracePeriod=30 Nov 24 19:36:56 crc kubenswrapper[4812]: I1124 19:36:56.553163 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601","Type":"ContainerStarted","Data":"fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7"} Nov 24 19:36:56 crc kubenswrapper[4812]: I1124 19:36:56.587793 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.8150277519999998 podStartE2EDuration="4.587766939s" podCreationTimestamp="2025-11-24 19:36:52 +0000 UTC" firstStartedPulling="2025-11-24 
19:36:53.595040741 +0000 UTC m=+1207.383993112" lastFinishedPulling="2025-11-24 19:36:54.367779928 +0000 UTC m=+1208.156732299" observedRunningTime="2025-11-24 19:36:56.579314356 +0000 UTC m=+1210.368266737" watchObservedRunningTime="2025-11-24 19:36:56.587766939 +0000 UTC m=+1210.376719350" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.176263 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.307270 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-scripts\") pod \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.307323 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data\") pod \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.307382 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsn7s\" (UniqueName: \"kubernetes.io/projected/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-kube-api-access-zsn7s\") pod \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.307497 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-combined-ca-bundle\") pod \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.307952 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-logs\") pod \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.308225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-logs" (OuterVolumeSpecName: "logs") pod "64fe61b4-a0ff-4112-a03b-3efbf41fc49d" (UID: "64fe61b4-a0ff-4112-a03b-3efbf41fc49d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.308356 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-etc-machine-id\") pod \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.308461 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "64fe61b4-a0ff-4112-a03b-3efbf41fc49d" (UID: "64fe61b4-a0ff-4112-a03b-3efbf41fc49d"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.308477 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data-custom\") pod \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\" (UID: \"64fe61b4-a0ff-4112-a03b-3efbf41fc49d\") " Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.308920 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.308936 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.313587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "64fe61b4-a0ff-4112-a03b-3efbf41fc49d" (UID: "64fe61b4-a0ff-4112-a03b-3efbf41fc49d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.333714 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-scripts" (OuterVolumeSpecName: "scripts") pod "64fe61b4-a0ff-4112-a03b-3efbf41fc49d" (UID: "64fe61b4-a0ff-4112-a03b-3efbf41fc49d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.333730 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-kube-api-access-zsn7s" (OuterVolumeSpecName: "kube-api-access-zsn7s") pod "64fe61b4-a0ff-4112-a03b-3efbf41fc49d" (UID: "64fe61b4-a0ff-4112-a03b-3efbf41fc49d"). InnerVolumeSpecName "kube-api-access-zsn7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.336853 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64fe61b4-a0ff-4112-a03b-3efbf41fc49d" (UID: "64fe61b4-a0ff-4112-a03b-3efbf41fc49d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.363901 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data" (OuterVolumeSpecName: "config-data") pod "64fe61b4-a0ff-4112-a03b-3efbf41fc49d" (UID: "64fe61b4-a0ff-4112-a03b-3efbf41fc49d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.406721 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-677dbcd544-vn9hr" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:52616->10.217.0.158:9311: read: connection reset by peer" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.406972 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-677dbcd544-vn9hr" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:52618->10.217.0.158:9311: read: connection reset by peer" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.410916 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.410944 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.410955 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsn7s\" (UniqueName: \"kubernetes.io/projected/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-kube-api-access-zsn7s\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.410969 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.410979 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/64fe61b4-a0ff-4112-a03b-3efbf41fc49d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.568358 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b2589de-d523-4229-adfa-d0b682557955" containerID="efe8190e176d4f61444a62711cb5acaffd4f1fc0e1300e501621076d93bceaf0" exitCode=0 Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.568404 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-677dbcd544-vn9hr" event={"ID":"1b2589de-d523-4229-adfa-d0b682557955","Type":"ContainerDied","Data":"efe8190e176d4f61444a62711cb5acaffd4f1fc0e1300e501621076d93bceaf0"} Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.570843 4812 generic.go:334] "Generic (PLEG): container finished" podID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerID="4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1" exitCode=0 Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.570912 4812 generic.go:334] "Generic (PLEG): container finished" podID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerID="73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3" exitCode=143 Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.570949 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.570949 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64fe61b4-a0ff-4112-a03b-3efbf41fc49d","Type":"ContainerDied","Data":"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1"} Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.571483 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64fe61b4-a0ff-4112-a03b-3efbf41fc49d","Type":"ContainerDied","Data":"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3"} Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.571521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"64fe61b4-a0ff-4112-a03b-3efbf41fc49d","Type":"ContainerDied","Data":"9c5d526ad2a989d4a5b2db94f738d588885f984ccaa77a82bd456b9a730a9826"} Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.571547 4812 scope.go:117] "RemoveContainer" containerID="4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.599598 4812 scope.go:117] "RemoveContainer" containerID="73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.643082 4812 scope.go:117] "RemoveContainer" containerID="4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1" Nov 24 19:36:57 crc kubenswrapper[4812]: E1124 19:36:57.643592 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1\": container with ID starting with 4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1 not found: ID does not exist" containerID="4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.643620 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1"} err="failed to get container status \"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1\": rpc error: code = NotFound desc = could not find container \"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1\": container with ID starting with 4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1 not found: ID does not exist" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.643639 4812 scope.go:117] "RemoveContainer" containerID="73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.646370 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:57 crc kubenswrapper[4812]: E1124 19:36:57.647059 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3\": container with ID starting with 73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3 not found: ID does not exist" containerID="73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.647113 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3"} err="failed to get container status \"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3\": rpc error: code = NotFound desc = could not find container \"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3\": container with ID starting with 73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3 not found: ID does not exist" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.647144 4812 scope.go:117] "RemoveContainer" containerID="4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.650372 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1"} err="failed to get container status \"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1\": rpc error: code = NotFound desc = could not find container \"4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1\": container with ID starting with 4799377af470aac68ad366d0732a5d2fea16a8519a2428c80693862d7f147aa1 not found: ID does not exist" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.650413 4812 scope.go:117] "RemoveContainer" containerID="73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.651080 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3"} err="failed to get container status \"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3\": rpc error: code = NotFound desc = could not find container \"73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3\": container with ID starting with 73a0ba9b546e038faaf53cdeb9962704e01ecec10ec08a01638a5d5cc066edb3 not found: ID does not exist" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.658969 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.666481 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:57 crc kubenswrapper[4812]: E1124 19:36:57.666878 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api-log" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.666895 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api-log" Nov 24 19:36:57 crc kubenswrapper[4812]: E1124 19:36:57.666919 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.666957 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.667130 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api-log" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.667151 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" containerName="cinder-api" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.668110 4812 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.670791 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.671056 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.671118 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.673397 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.822400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.822471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-logs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.822669 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-scripts\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.822721 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj264\" (UniqueName: \"kubernetes.io/projected/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-kube-api-access-mj264\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.822749 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.822792 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.822889 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.823102 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.823323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.893155 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.925786 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926106 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926698 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926760 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926823 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-logs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926858 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-scripts\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj264\" (UniqueName: \"kubernetes.io/projected/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-kube-api-access-mj264\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926952 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data\") pod \"cinder-api-0\" (UID: 
\"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.926991 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.927586 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-etc-machine-id\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.927923 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-logs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.934957 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data-custom\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.935085 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.935154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.935388 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-public-tls-certs\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.935799 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-scripts\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.942030 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " pod="openstack/cinder-api-0" Nov 24 19:36:57 crc kubenswrapper[4812]: I1124 19:36:57.942615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj264\" (UniqueName: \"kubernetes.io/projected/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-kube-api-access-mj264\") pod \"cinder-api-0\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " 
pod="openstack/cinder-api-0" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.028269 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvql\" (UniqueName: \"kubernetes.io/projected/1b2589de-d523-4229-adfa-d0b682557955-kube-api-access-9nvql\") pod \"1b2589de-d523-4229-adfa-d0b682557955\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.028368 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data-custom\") pod \"1b2589de-d523-4229-adfa-d0b682557955\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.028432 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2589de-d523-4229-adfa-d0b682557955-logs\") pod \"1b2589de-d523-4229-adfa-d0b682557955\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.028551 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-combined-ca-bundle\") pod \"1b2589de-d523-4229-adfa-d0b682557955\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.028579 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data\") pod \"1b2589de-d523-4229-adfa-d0b682557955\" (UID: \"1b2589de-d523-4229-adfa-d0b682557955\") " Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.030458 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.030809 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b2589de-d523-4229-adfa-d0b682557955-logs" (OuterVolumeSpecName: "logs") pod "1b2589de-d523-4229-adfa-d0b682557955" (UID: "1b2589de-d523-4229-adfa-d0b682557955"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.033253 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1b2589de-d523-4229-adfa-d0b682557955" (UID: "1b2589de-d523-4229-adfa-d0b682557955"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.041734 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b2589de-d523-4229-adfa-d0b682557955-kube-api-access-9nvql" (OuterVolumeSpecName: "kube-api-access-9nvql") pod "1b2589de-d523-4229-adfa-d0b682557955" (UID: "1b2589de-d523-4229-adfa-d0b682557955"). InnerVolumeSpecName "kube-api-access-9nvql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.084442 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b2589de-d523-4229-adfa-d0b682557955" (UID: "1b2589de-d523-4229-adfa-d0b682557955"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.088690 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data" (OuterVolumeSpecName: "config-data") pod "1b2589de-d523-4229-adfa-d0b682557955" (UID: "1b2589de-d523-4229-adfa-d0b682557955"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.107886 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.132898 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.132937 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b2589de-d523-4229-adfa-d0b682557955-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.132948 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.132956 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b2589de-d523-4229-adfa-d0b682557955-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.132964 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nvql\" (UniqueName: \"kubernetes.io/projected/1b2589de-d523-4229-adfa-d0b682557955-kube-api-access-9nvql\") on node \"crc\" DevicePath \"\"" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.495876 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:36:58 crc kubenswrapper[4812]: W1124 19:36:58.500218 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde0c0304_db1e_4a1e_8b7e_5700c55fc9ed.slice/crio-8400e0f5c9e4eb63ab3189b007b4d18c10dc4d7ede0f8b4a402d159ad599d536 WatchSource:0}: Error finding container 8400e0f5c9e4eb63ab3189b007b4d18c10dc4d7ede0f8b4a402d159ad599d536: Status 404 returned error can't find the container with id 8400e0f5c9e4eb63ab3189b007b4d18c10dc4d7ede0f8b4a402d159ad599d536 Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.583958 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed","Type":"ContainerStarted","Data":"8400e0f5c9e4eb63ab3189b007b4d18c10dc4d7ede0f8b4a402d159ad599d536"} Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.586572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-677dbcd544-vn9hr" event={"ID":"1b2589de-d523-4229-adfa-d0b682557955","Type":"ContainerDied","Data":"5e41bf460e7828cc8ad0ecb56e39118e5257e8cbef4e4a5d285aa5cb0bdedc8a"} Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.586626 4812 scope.go:117] "RemoveContainer" containerID="efe8190e176d4f61444a62711cb5acaffd4f1fc0e1300e501621076d93bceaf0" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.586762 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-677dbcd544-vn9hr" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.632601 4812 scope.go:117] "RemoveContainer" containerID="2c08fc1656dc6af46fe487c79f4e8e13a88ca0c62c23dac3c5e5110812aabe0e" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.637586 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-677dbcd544-vn9hr"] Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.646401 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-677dbcd544-vn9hr"] Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.982263 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b2589de-d523-4229-adfa-d0b682557955" path="/var/lib/kubelet/pods/1b2589de-d523-4229-adfa-d0b682557955/volumes" Nov 24 19:36:58 crc kubenswrapper[4812]: I1124 19:36:58.983132 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64fe61b4-a0ff-4112-a03b-3efbf41fc49d" path="/var/lib/kubelet/pods/64fe61b4-a0ff-4112-a03b-3efbf41fc49d/volumes" Nov 24 19:36:59 crc kubenswrapper[4812]: I1124 19:36:59.615419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed","Type":"ContainerStarted","Data":"5afde10d2f90a0c5d1bc9ea9375923c024c51c32da257a3842c5489dcc1fd794"} Nov 24 19:37:00 crc kubenswrapper[4812]: I1124 19:37:00.408567 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-574f454648-szvd4" Nov 24 19:37:00 crc kubenswrapper[4812]: I1124 19:37:00.450948 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-574f454648-szvd4" Nov 24 19:37:00 crc kubenswrapper[4812]: I1124 19:37:00.628472 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed","Type":"ContainerStarted","Data":"4984c0a89981ef7e9ccb2912cd1e3db25ba3f333055dcbc10b30cb69f8c45835"} Nov 24 19:37:00 crc kubenswrapper[4812]: I1124 19:37:00.628535 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 19:37:00 crc kubenswrapper[4812]: I1124 19:37:00.652979 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.652953911 podStartE2EDuration="3.652953911s" podCreationTimestamp="2025-11-24 19:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:00.644677593 +0000 UTC m=+1214.433629964" watchObservedRunningTime="2025-11-24 19:37:00.652953911 +0000 UTC m=+1214.441906272" Nov 24 19:37:00 crc kubenswrapper[4812]: I1124 19:37:00.990820 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.532960 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstackclient"] Nov 24 19:37:01 crc kubenswrapper[4812]: E1124 19:37:01.533656 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.533676 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api" Nov 24 19:37:01 crc kubenswrapper[4812]: E1124 19:37:01.533713 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api-log" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.533721 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api-log" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.533955 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api-log" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.533975 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b2589de-d523-4229-adfa-d0b682557955" containerName="barbican-api" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.534639 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.537700 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.537782 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ws29m" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.544812 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.556904 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.689557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72qwx\" (UniqueName: \"kubernetes.io/projected/8f7453e8-f9c5-4588-80b8-82bba37e1514-kube-api-access-72qwx\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.689708 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.689760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.689792 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config\") pod 
\"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.791249 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.791421 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.791498 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.791569 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72qwx\" (UniqueName: \"kubernetes.io/projected/8f7453e8-f9c5-4588-80b8-82bba37e1514-kube-api-access-72qwx\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.792379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.797897 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.808998 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.813698 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72qwx\" (UniqueName: \"kubernetes.io/projected/8f7453e8-f9c5-4588-80b8-82bba37e1514-kube-api-access-72qwx\") pod \"openstackclient\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " pod="openstack/openstackclient" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.846308 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:37:01 crc kubenswrapper[4812]: I1124 19:37:01.874162 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 19:37:02 crc kubenswrapper[4812]: I1124 19:37:02.489168 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 19:37:02 crc kubenswrapper[4812]: I1124 19:37:02.649263 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f7453e8-f9c5-4588-80b8-82bba37e1514","Type":"ContainerStarted","Data":"dda9fb491806444d102ed4073bce56b6a072539d0c4dae0b2c50b32b6227cf74"} Nov 24 19:37:02 crc kubenswrapper[4812]: I1124 19:37:02.998643 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:37:02 crc kubenswrapper[4812]: I1124 19:37:02.998706 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.167598 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.227684 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-fddsx"] Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.227878 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerName="dnsmasq-dns" containerID="cri-o://451d964023b8aa7abae6cf6fa6715769fa25331a56e819b52615f9340c303939" gracePeriod=10 Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.369437 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.414849 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.661386 4812 generic.go:334] "Generic (PLEG): container finished" podID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerID="451d964023b8aa7abae6cf6fa6715769fa25331a56e819b52615f9340c303939" exitCode=0 Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.661476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" event={"ID":"d260a634-3b2f-49ae-9506-1dc164f7423f","Type":"ContainerDied","Data":"451d964023b8aa7abae6cf6fa6715769fa25331a56e819b52615f9340c303939"} Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.662025 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="cinder-scheduler" containerID="cri-o://089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8" gracePeriod=30 Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.662059 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="probe" containerID="cri-o://fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7" gracePeriod=30 
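The "SyncLoop ADD/UPDATE/DELETE/REMOVE" lines above are the kubelet reacting to pod changes it receives from the API server. A minimal client-go sketch that watches the same transitions for the openstack namespace from outside the node (the kubeconfig path and the printed labels are illustrative assumptions, not anything taken from this log):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Informer scoped to the "openstack" namespace seen throughout the log.
	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("openstack"))
	podInformer := factory.Core().V1().Pods().Informer()

	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			p := obj.(*corev1.Pod)
			fmt.Printf("ADD    %s/%s\n", p.Namespace, p.Name)
		},
		UpdateFunc: func(_, obj interface{}) {
			p := obj.(*corev1.Pod)
			fmt.Printf("UPDATE %s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		},
		DeleteFunc: func(obj interface{}) {
			if p, ok := obj.(*corev1.Pod); ok {
				fmt.Printf("DELETE %s/%s\n", p.Namespace, p.Name)
			}
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block; event handlers print as pods churn
}

Running this while a pod such as cinder-api-0 is recreated would show roughly the same ADD/UPDATE/DELETE ordering that kubelet logs here.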
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.797266 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx"
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.941962 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-sb\") pod \"d260a634-3b2f-49ae-9506-1dc164f7423f\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") "
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.942015 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-nb\") pod \"d260a634-3b2f-49ae-9506-1dc164f7423f\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") "
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.942060 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-swift-storage-0\") pod \"d260a634-3b2f-49ae-9506-1dc164f7423f\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") "
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.942111 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-config\") pod \"d260a634-3b2f-49ae-9506-1dc164f7423f\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") "
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.942150 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-svc\") pod \"d260a634-3b2f-49ae-9506-1dc164f7423f\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") "
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.942197 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkffz\" (UniqueName: \"kubernetes.io/projected/d260a634-3b2f-49ae-9506-1dc164f7423f-kube-api-access-gkffz\") pod \"d260a634-3b2f-49ae-9506-1dc164f7423f\" (UID: \"d260a634-3b2f-49ae-9506-1dc164f7423f\") "
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.947630 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d260a634-3b2f-49ae-9506-1dc164f7423f-kube-api-access-gkffz" (OuterVolumeSpecName: "kube-api-access-gkffz") pod "d260a634-3b2f-49ae-9506-1dc164f7423f" (UID: "d260a634-3b2f-49ae-9506-1dc164f7423f"). InnerVolumeSpecName "kube-api-access-gkffz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:37:03 crc kubenswrapper[4812]: I1124 19:37:03.998902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d260a634-3b2f-49ae-9506-1dc164f7423f" (UID: "d260a634-3b2f-49ae-9506-1dc164f7423f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:03.999539 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d260a634-3b2f-49ae-9506-1dc164f7423f" (UID: "d260a634-3b2f-49ae-9506-1dc164f7423f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.010198 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d260a634-3b2f-49ae-9506-1dc164f7423f" (UID: "d260a634-3b2f-49ae-9506-1dc164f7423f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.027144 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-config" (OuterVolumeSpecName: "config") pod "d260a634-3b2f-49ae-9506-1dc164f7423f" (UID: "d260a634-3b2f-49ae-9506-1dc164f7423f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.040020 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d260a634-3b2f-49ae-9506-1dc164f7423f" (UID: "d260a634-3b2f-49ae-9506-1dc164f7423f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.044170 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkffz\" (UniqueName: \"kubernetes.io/projected/d260a634-3b2f-49ae-9506-1dc164f7423f-kube-api-access-gkffz\") on node \"crc\" DevicePath \"\""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.044198 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.044206 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.044217 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.044226 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-config\") on node \"crc\" DevicePath \"\""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.044234 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d260a634-3b2f-49ae-9506-1dc164f7423f-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.671896 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerID="fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7" exitCode=0
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.672089 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601","Type":"ContainerDied","Data":"fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7"}
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.674656 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx" event={"ID":"d260a634-3b2f-49ae-9506-1dc164f7423f","Type":"ContainerDied","Data":"fb7849910af48da5a6021cc94d856570e08515933439be782b359b41eaf00cc3"}
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.674694 4812 scope.go:117] "RemoveContainer" containerID="451d964023b8aa7abae6cf6fa6715769fa25331a56e819b52615f9340c303939"
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.674793 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cc67f459c-fddsx"
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.715937 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-fddsx"]
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.717273 4812 scope.go:117] "RemoveContainer" containerID="b4a1f22de43ee24708541e30e1eded4a7fec3aed21f30085ce3168851a95fb50"
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.725794 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cc67f459c-fddsx"]
Nov 24 19:37:04 crc kubenswrapper[4812]: I1124 19:37:04.979927 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" path="/var/lib/kubelet/pods/d260a634-3b2f-49ae-9506-1dc164f7423f/volumes"
Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.838547 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.840230 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-central-agent" containerID="cri-o://0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055" gracePeriod=30
Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.840498 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="proxy-httpd" containerID="cri-o://61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78" gracePeriod=30
Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.840644 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="sg-core" containerID="cri-o://da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286" gracePeriod=30
Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.840794 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-notification-agent" containerID="cri-o://bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1" gracePeriod=30
Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.851702 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.986527 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-86f4846457-t2fl4"]
Nov 24 19:37:05 crc kubenswrapper[4812]: E1124 19:37:05.986961 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerName="init"
"RemoveStaleState: removing container" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerName="init" Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.986974 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerName="init" Nov 24 19:37:05 crc kubenswrapper[4812]: E1124 19:37:05.986991 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerName="dnsmasq-dns" Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.986999 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerName="dnsmasq-dns" Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.987184 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d260a634-3b2f-49ae-9506-1dc164f7423f" containerName="dnsmasq-dns" Nov 24 19:37:05 crc kubenswrapper[4812]: I1124 19:37:05.988152 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.003881 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.004104 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.004261 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.013708 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86f4846457-t2fl4"] Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085513 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6pmc\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-kube-api-access-j6pmc\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085556 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-run-httpd\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-public-tls-certs\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085629 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-combined-ca-bundle\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085650 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-internal-tls-certs\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085676 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-etc-swift\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085711 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-log-httpd\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.085769 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-config-data\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.186837 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6pmc\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-kube-api-access-j6pmc\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.187095 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-run-httpd\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.187205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-public-tls-certs\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.187292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-combined-ca-bundle\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.187387 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-internal-tls-certs\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.187470 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-etc-swift\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.187565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-log-httpd\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.187674 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-config-data\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.190670 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-run-httpd\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.190739 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-log-httpd\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.193601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-config-data\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.193888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-etc-swift\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.194647 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-internal-tls-certs\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.199035 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-combined-ca-bundle\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.199549 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-public-tls-certs\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: 
\"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.213040 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6pmc\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-kube-api-access-j6pmc\") pod \"swift-proxy-86f4846457-t2fl4\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.319681 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.395527 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84884b67b4-mzcbv"] Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.395808 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84884b67b4-mzcbv" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-api" containerID="cri-o://79b5f87c486798f95647660266b1dbff5d59afbb11903583a975e0ccee7cb993" gracePeriod=30 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.396972 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84884b67b4-mzcbv" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-httpd" containerID="cri-o://e7b27c07f1634e844989fa1c25a4ba21f6b40df5db57e3e377b5bf27b2d39a29" gracePeriod=30 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.457776 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.477008 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.599870 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data-custom\") pod \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.600225 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-combined-ca-bundle\") pod \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.600261 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-etc-machine-id\") pod \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.600283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-scripts\") pod \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.600538 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckj8t\" (UniqueName: \"kubernetes.io/projected/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-kube-api-access-ckj8t\") pod \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.600575 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data\") pod \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\" (UID: \"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601\") " Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.600607 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" (UID: "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.600988 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.605749 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-scripts" (OuterVolumeSpecName: "scripts") pod "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" (UID: "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.606977 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" (UID: "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.612768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-kube-api-access-ckj8t" (OuterVolumeSpecName: "kube-api-access-ckj8t") pod "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" (UID: "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601"). InnerVolumeSpecName "kube-api-access-ckj8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.675123 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" (UID: "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.703922 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.704143 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.704209 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.704264 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckj8t\" (UniqueName: \"kubernetes.io/projected/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-kube-api-access-ckj8t\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.712544 4812 generic.go:334] "Generic (PLEG): container finished" podID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerID="e7b27c07f1634e844989fa1c25a4ba21f6b40df5db57e3e377b5bf27b2d39a29" exitCode=0 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.712760 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84884b67b4-mzcbv" event={"ID":"b4073e4e-f8f0-4ef9-a813-87920edaa840","Type":"ContainerDied","Data":"e7b27c07f1634e844989fa1c25a4ba21f6b40df5db57e3e377b5bf27b2d39a29"} Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.716360 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerID="089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8" exitCode=0 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.716469 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601","Type":"ContainerDied","Data":"089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8"} Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.716536 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"cf87d5d2-b5f7-42ca-8e2b-1cca59fad601","Type":"ContainerDied","Data":"83e2d85f12d1f7209edc72100c24df2c6bcd8a742586324a84e577be3808aefd"} Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.716597 4812 scope.go:117] "RemoveContainer" containerID="fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.717003 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.726250 4812 generic.go:334] "Generic (PLEG): container finished" podID="04be8da8-17d2-450e-8d75-f30600c4656a" containerID="61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78" exitCode=0 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.728237 4812 generic.go:334] "Generic (PLEG): container finished" podID="04be8da8-17d2-450e-8d75-f30600c4656a" containerID="da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286" exitCode=2 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.728299 4812 generic.go:334] "Generic (PLEG): container finished" podID="04be8da8-17d2-450e-8d75-f30600c4656a" containerID="0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055" exitCode=0 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.728376 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerDied","Data":"61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78"} Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.728446 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerDied","Data":"da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286"} Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.728503 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerDied","Data":"0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055"} Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.775282 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.775654 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-httpd" containerID="cri-o://9901b266eef29a4a8c9eb6aa553ed705c00a6fa7715ab7158d9f2421450a1193" gracePeriod=30 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.775557 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-log" containerID="cri-o://8bdc38b71291931cd55ce60ff4be4f9cb9e8f79ff63fb83d416f76649df38c59" gracePeriod=30 Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.777434 4812 scope.go:117] "RemoveContainer" containerID="089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8" Nov 24 19:37:06 crc 
Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.800457 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data" (OuterVolumeSpecName: "config-data") pod "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" (UID: "cf87d5d2-b5f7-42ca-8e2b-1cca59fad601"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.808145 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.840344 4812 scope.go:117] "RemoveContainer" containerID="fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7"
Nov 24 19:37:06 crc kubenswrapper[4812]: E1124 19:37:06.850925 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7\": container with ID starting with fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7 not found: ID does not exist" containerID="fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7"
Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.850979 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7"} err="failed to get container status \"fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7\": rpc error: code = NotFound desc = could not find container \"fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7\": container with ID starting with fdd570be79e8627c601073ae1b301242766bbe505b10dc6ffaa76bfa5f9576f7 not found: ID does not exist"
Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.851006 4812 scope.go:117] "RemoveContainer" containerID="089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8"
Nov 24 19:37:06 crc kubenswrapper[4812]: E1124 19:37:06.853286 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8\": container with ID starting with 089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8 not found: ID does not exist" containerID="089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8"
Nov 24 19:37:06 crc kubenswrapper[4812]: I1124 19:37:06.853308 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8"} err="failed to get container status \"089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8\": rpc error: code = NotFound desc = could not find container \"089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8\": container with ID starting with 089803d3ffe85e680486b30faf84d12675eff5e621536769557e8542dbd997e8 not found: ID does not exist"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.104264 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-86f4846457-t2fl4"]
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.128768 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.143600 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.152491 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 19:37:07 crc kubenswrapper[4812]: E1124 19:37:07.152937 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="probe"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.152950 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="probe"
Nov 24 19:37:07 crc kubenswrapper[4812]: E1124 19:37:07.152966 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="cinder-scheduler"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.152974 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="cinder-scheduler"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.153143 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="probe"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.153156 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" containerName="cinder-scheduler"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.160306 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.162012 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.175499 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.225257 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8cv\" (UniqueName: \"kubernetes.io/projected/7063bb41-d0d1-4605-b265-1fb3adce77b5-kube-api-access-5p8cv\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.225306 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7063bb41-d0d1-4605-b265-1fb3adce77b5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.225326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0"
Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.225370 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.225450 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.265199 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326309 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-run-httpd\") pod \"04be8da8-17d2-450e-8d75-f30600c4656a\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326392 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-scripts\") pod \"04be8da8-17d2-450e-8d75-f30600c4656a\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326471 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-config-data\") pod \"04be8da8-17d2-450e-8d75-f30600c4656a\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326488 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-combined-ca-bundle\") pod \"04be8da8-17d2-450e-8d75-f30600c4656a\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326529 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvc7v\" (UniqueName: \"kubernetes.io/projected/04be8da8-17d2-450e-8d75-f30600c4656a-kube-api-access-mvc7v\") pod \"04be8da8-17d2-450e-8d75-f30600c4656a\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326550 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-log-httpd\") pod \"04be8da8-17d2-450e-8d75-f30600c4656a\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326583 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-sg-core-conf-yaml\") pod \"04be8da8-17d2-450e-8d75-f30600c4656a\" (UID: \"04be8da8-17d2-450e-8d75-f30600c4656a\") " Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326883 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326925 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.326976 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.327045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p8cv\" (UniqueName: \"kubernetes.io/projected/7063bb41-d0d1-4605-b265-1fb3adce77b5-kube-api-access-5p8cv\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.327063 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7063bb41-d0d1-4605-b265-1fb3adce77b5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.327079 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.331509 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04be8da8-17d2-450e-8d75-f30600c4656a" (UID: "04be8da8-17d2-450e-8d75-f30600c4656a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.331750 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04be8da8-17d2-450e-8d75-f30600c4656a" (UID: "04be8da8-17d2-450e-8d75-f30600c4656a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.335093 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7063bb41-d0d1-4605-b265-1fb3adce77b5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.335934 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.336065 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.338662 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.339037 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.340328 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-scripts" (OuterVolumeSpecName: "scripts") pod "04be8da8-17d2-450e-8d75-f30600c4656a" (UID: "04be8da8-17d2-450e-8d75-f30600c4656a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.341919 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04be8da8-17d2-450e-8d75-f30600c4656a-kube-api-access-mvc7v" (OuterVolumeSpecName: "kube-api-access-mvc7v") pod "04be8da8-17d2-450e-8d75-f30600c4656a" (UID: "04be8da8-17d2-450e-8d75-f30600c4656a"). InnerVolumeSpecName "kube-api-access-mvc7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.371318 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p8cv\" (UniqueName: \"kubernetes.io/projected/7063bb41-d0d1-4605-b265-1fb3adce77b5-kube-api-access-5p8cv\") pod \"cinder-scheduler-0\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.401587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04be8da8-17d2-450e-8d75-f30600c4656a" (UID: "04be8da8-17d2-450e-8d75-f30600c4656a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.429252 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.429297 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.429313 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvc7v\" (UniqueName: \"kubernetes.io/projected/04be8da8-17d2-450e-8d75-f30600c4656a-kube-api-access-mvc7v\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.429324 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04be8da8-17d2-450e-8d75-f30600c4656a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.429346 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.453798 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04be8da8-17d2-450e-8d75-f30600c4656a" (UID: "04be8da8-17d2-450e-8d75-f30600c4656a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.468511 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-config-data" (OuterVolumeSpecName: "config-data") pod "04be8da8-17d2-450e-8d75-f30600c4656a" (UID: "04be8da8-17d2-450e-8d75-f30600c4656a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.530849 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.530880 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04be8da8-17d2-450e-8d75-f30600c4656a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.575227 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.739162 4812 generic.go:334] "Generic (PLEG): container finished" podID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerID="8bdc38b71291931cd55ce60ff4be4f9cb9e8f79ff63fb83d416f76649df38c59" exitCode=143 Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.739239 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61c84290-c737-4cee-8ac2-cfe1bf02e656","Type":"ContainerDied","Data":"8bdc38b71291931cd55ce60ff4be4f9cb9e8f79ff63fb83d416f76649df38c59"} Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.744813 4812 generic.go:334] "Generic (PLEG): container finished" podID="04be8da8-17d2-450e-8d75-f30600c4656a" containerID="bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1" exitCode=0 Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.744874 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerDied","Data":"bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1"} Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.744889 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.744902 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04be8da8-17d2-450e-8d75-f30600c4656a","Type":"ContainerDied","Data":"f56c6b0bd8da8b7ab4281a4580efcdb049022fc7f06bcf927a04b4f964ceffb5"} Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.744919 4812 scope.go:117] "RemoveContainer" containerID="61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.749074 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86f4846457-t2fl4" event={"ID":"418d4320-50e7-4767-bed7-db43af1583f0","Type":"ContainerStarted","Data":"c1a3bf87878ce67c52c317f9bd13b4bf250280872ea6f819e3464fba67ab5710"} Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.749099 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86f4846457-t2fl4" event={"ID":"418d4320-50e7-4767-bed7-db43af1583f0","Type":"ContainerStarted","Data":"8896e5afc4cb3d3e2a72bc8141f236974d95f7861b6c4367e44b1f7fb86fe9c4"} Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.780079 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.801252 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.813926 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:07 crc kubenswrapper[4812]: E1124 19:37:07.814611 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-central-agent" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.814636 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-central-agent" Nov 24 19:37:07 crc kubenswrapper[4812]: E1124 19:37:07.814654 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="proxy-httpd" Nov 24 19:37:07 crc 
kubenswrapper[4812]: I1124 19:37:07.814665 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="proxy-httpd" Nov 24 19:37:07 crc kubenswrapper[4812]: E1124 19:37:07.814683 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="sg-core" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.814693 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="sg-core" Nov 24 19:37:07 crc kubenswrapper[4812]: E1124 19:37:07.814714 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-notification-agent" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.814722 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-notification-agent" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.815127 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-central-agent" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.815154 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="sg-core" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.815166 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="proxy-httpd" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.815194 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" containerName="ceilometer-notification-agent" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.817777 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.821582 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.821892 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.822146 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.937060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-config-data\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.937109 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.937134 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-scripts\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.937154 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-run-httpd\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.937217 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.937276 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpklf\" (UniqueName: \"kubernetes.io/projected/5b22484e-ecb8-48ad-846a-9bf09c732ad3-kube-api-access-zpklf\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:07 crc kubenswrapper[4812]: I1124 19:37:07.937437 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-log-httpd\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.039568 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-log-httpd\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.039663 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-config-data\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.039703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.039727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-scripts\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.039748 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-run-httpd\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.039768 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.039801 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpklf\" (UniqueName: \"kubernetes.io/projected/5b22484e-ecb8-48ad-846a-9bf09c732ad3-kube-api-access-zpklf\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.040833 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-log-httpd\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.045606 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-config-data\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.047023 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-scripts\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.047330 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-run-httpd\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.049184 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.050029 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.056056 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpklf\" (UniqueName: \"kubernetes.io/projected/5b22484e-ecb8-48ad-846a-9bf09c732ad3-kube-api-access-zpklf\") pod \"ceilometer-0\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.131601 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.983314 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04be8da8-17d2-450e-8d75-f30600c4656a" path="/var/lib/kubelet/pods/04be8da8-17d2-450e-8d75-f30600c4656a/volumes" Nov 24 19:37:08 crc kubenswrapper[4812]: I1124 19:37:08.984540 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf87d5d2-b5f7-42ca-8e2b-1cca59fad601" path="/var/lib/kubelet/pods/cf87d5d2-b5f7-42ca-8e2b-1cca59fad601/volumes" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.331462 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.331697 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-log" containerID="cri-o://d58f24f98b6547ef728cf80650b0bb8c34d954282d7cfc59ab58fb5f9fd62005" gracePeriod=30 Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.331822 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-httpd" containerID="cri-o://9ba3157ec54faa9b0fe3a70b70f8e9b99036f5aa81cab267d90afa91ac91cfc9" gracePeriod=30 Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.515710 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-wv687"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.516930 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.536309 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wv687"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.566528 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff63a07-b659-4f65-a752-4b244ec5b470-operator-scripts\") pod \"nova-api-db-create-wv687\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.566614 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr9sr\" (UniqueName: \"kubernetes.io/projected/1ff63a07-b659-4f65-a752-4b244ec5b470-kube-api-access-cr9sr\") pod \"nova-api-db-create-wv687\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.610197 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-24r56"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.619394 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5538-account-create-b9jdq"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.620160 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.620611 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.630651 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.631666 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-24r56"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.637566 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5538-account-create-b9jdq"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.672486 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b4924c-df56-43c7-a032-83eb18d1eca0-operator-scripts\") pod \"nova-api-5538-account-create-b9jdq\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.672533 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff101b79-8551-4d29-b686-d476abad5900-operator-scripts\") pod \"nova-cell0-db-create-24r56\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.672565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff63a07-b659-4f65-a752-4b244ec5b470-operator-scripts\") pod \"nova-api-db-create-wv687\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.672585 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jxpr\" (UniqueName: \"kubernetes.io/projected/ff101b79-8551-4d29-b686-d476abad5900-kube-api-access-2jxpr\") pod \"nova-cell0-db-create-24r56\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.672613 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75ct7\" (UniqueName: \"kubernetes.io/projected/53b4924c-df56-43c7-a032-83eb18d1eca0-kube-api-access-75ct7\") pod \"nova-api-5538-account-create-b9jdq\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.672632 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr9sr\" (UniqueName: \"kubernetes.io/projected/1ff63a07-b659-4f65-a752-4b244ec5b470-kube-api-access-cr9sr\") pod \"nova-api-db-create-wv687\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.673732 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff63a07-b659-4f65-a752-4b244ec5b470-operator-scripts\") pod \"nova-api-db-create-wv687\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.694209 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr9sr\" (UniqueName: \"kubernetes.io/projected/1ff63a07-b659-4f65-a752-4b244ec5b470-kube-api-access-cr9sr\") pod \"nova-api-db-create-wv687\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.715749 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-c5h9s"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.718760 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.739877 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c5h9s"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.774217 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff101b79-8551-4d29-b686-d476abad5900-operator-scripts\") pod \"nova-cell0-db-create-24r56\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.774974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff101b79-8551-4d29-b686-d476abad5900-operator-scripts\") pod \"nova-cell0-db-create-24r56\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.775055 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jxpr\" (UniqueName: \"kubernetes.io/projected/ff101b79-8551-4d29-b686-d476abad5900-kube-api-access-2jxpr\") pod \"nova-cell0-db-create-24r56\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.775356 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75ct7\" (UniqueName: \"kubernetes.io/projected/53b4924c-df56-43c7-a032-83eb18d1eca0-kube-api-access-75ct7\") pod \"nova-api-5538-account-create-b9jdq\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.775464 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psndm\" (UniqueName: \"kubernetes.io/projected/29f01750-5996-43aa-9799-1f08a3e68b53-kube-api-access-psndm\") pod \"nova-cell1-db-create-c5h9s\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.775530 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b4924c-df56-43c7-a032-83eb18d1eca0-operator-scripts\") pod \"nova-api-5538-account-create-b9jdq\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.775557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f01750-5996-43aa-9799-1f08a3e68b53-operator-scripts\") pod \"nova-cell1-db-create-c5h9s\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.776323 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b4924c-df56-43c7-a032-83eb18d1eca0-operator-scripts\") pod \"nova-api-5538-account-create-b9jdq\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.794795 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-75ct7\" (UniqueName: \"kubernetes.io/projected/53b4924c-df56-43c7-a032-83eb18d1eca0-kube-api-access-75ct7\") pod \"nova-api-5538-account-create-b9jdq\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.799268 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerID="d58f24f98b6547ef728cf80650b0bb8c34d954282d7cfc59ab58fb5f9fd62005" exitCode=143 Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.799309 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905","Type":"ContainerDied","Data":"d58f24f98b6547ef728cf80650b0bb8c34d954282d7cfc59ab58fb5f9fd62005"} Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.805406 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jxpr\" (UniqueName: \"kubernetes.io/projected/ff101b79-8551-4d29-b686-d476abad5900-kube-api-access-2jxpr\") pod \"nova-cell0-db-create-24r56\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.829566 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-63c1-account-create-zcbsq"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.831251 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.832095 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.837583 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.845768 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-63c1-account-create-zcbsq"] Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.876725 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f01750-5996-43aa-9799-1f08a3e68b53-operator-scripts\") pod \"nova-cell1-db-create-c5h9s\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.876803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5hjd\" (UniqueName: \"kubernetes.io/projected/689ace22-4dce-411e-8558-8e84de48d105-kube-api-access-m5hjd\") pod \"nova-cell0-63c1-account-create-zcbsq\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.876848 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689ace22-4dce-411e-8558-8e84de48d105-operator-scripts\") pod \"nova-cell0-63c1-account-create-zcbsq\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.876908 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-psndm\" (UniqueName: \"kubernetes.io/projected/29f01750-5996-43aa-9799-1f08a3e68b53-kube-api-access-psndm\") pod \"nova-cell1-db-create-c5h9s\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.877616 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f01750-5996-43aa-9799-1f08a3e68b53-operator-scripts\") pod \"nova-cell1-db-create-c5h9s\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.894251 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psndm\" (UniqueName: \"kubernetes.io/projected/29f01750-5996-43aa-9799-1f08a3e68b53-kube-api-access-psndm\") pod \"nova-cell1-db-create-c5h9s\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.942452 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.955991 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.978715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5hjd\" (UniqueName: \"kubernetes.io/projected/689ace22-4dce-411e-8558-8e84de48d105-kube-api-access-m5hjd\") pod \"nova-cell0-63c1-account-create-zcbsq\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.978767 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689ace22-4dce-411e-8558-8e84de48d105-operator-scripts\") pod \"nova-cell0-63c1-account-create-zcbsq\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.979391 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689ace22-4dce-411e-8558-8e84de48d105-operator-scripts\") pod \"nova-cell0-63c1-account-create-zcbsq\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:09 crc kubenswrapper[4812]: I1124 19:37:09.998433 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5hjd\" (UniqueName: \"kubernetes.io/projected/689ace22-4dce-411e-8558-8e84de48d105-kube-api-access-m5hjd\") pod \"nova-cell0-63c1-account-create-zcbsq\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.037240 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2180-account-create-k94c2"] Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.038304 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.044897 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2180-account-create-k94c2"] Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.046613 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.056486 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.080507 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/948e94ea-f146-46a9-81e7-1254f7e3661e-operator-scripts\") pod \"nova-cell1-2180-account-create-k94c2\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.080645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vs2\" (UniqueName: \"kubernetes.io/projected/948e94ea-f146-46a9-81e7-1254f7e3661e-kube-api-access-z9vs2\") pod \"nova-cell1-2180-account-create-k94c2\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.155418 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.182283 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9vs2\" (UniqueName: \"kubernetes.io/projected/948e94ea-f146-46a9-81e7-1254f7e3661e-kube-api-access-z9vs2\") pod \"nova-cell1-2180-account-create-k94c2\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.182490 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/948e94ea-f146-46a9-81e7-1254f7e3661e-operator-scripts\") pod \"nova-cell1-2180-account-create-k94c2\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.183424 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/948e94ea-f146-46a9-81e7-1254f7e3661e-operator-scripts\") pod \"nova-cell1-2180-account-create-k94c2\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.186808 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.226980 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.241051 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9vs2\" (UniqueName: \"kubernetes.io/projected/948e94ea-f146-46a9-81e7-1254f7e3661e-kube-api-access-z9vs2\") pod \"nova-cell1-2180-account-create-k94c2\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.368277 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.837785 4812 generic.go:334] "Generic (PLEG): container finished" podID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerID="9901b266eef29a4a8c9eb6aa553ed705c00a6fa7715ab7158d9f2421450a1193" exitCode=0 Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.837941 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61c84290-c737-4cee-8ac2-cfe1bf02e656","Type":"ContainerDied","Data":"9901b266eef29a4a8c9eb6aa553ed705c00a6fa7715ab7158d9f2421450a1193"} Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.840449 4812 generic.go:334] "Generic (PLEG): container finished" podID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerID="79b5f87c486798f95647660266b1dbff5d59afbb11903583a975e0ccee7cb993" exitCode=0 Nov 24 19:37:10 crc kubenswrapper[4812]: I1124 19:37:10.840476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84884b67b4-mzcbv" event={"ID":"b4073e4e-f8f0-4ef9-a813-87920edaa840","Type":"ContainerDied","Data":"79b5f87c486798f95647660266b1dbff5d59afbb11903583a975e0ccee7cb993"} Nov 24 19:37:12 crc kubenswrapper[4812]: I1124 19:37:12.877890 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerID="9ba3157ec54faa9b0fe3a70b70f8e9b99036f5aa81cab267d90afa91ac91cfc9" exitCode=0 Nov 24 19:37:12 crc kubenswrapper[4812]: I1124 19:37:12.877961 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905","Type":"ContainerDied","Data":"9ba3157ec54faa9b0fe3a70b70f8e9b99036f5aa81cab267d90afa91ac91cfc9"} Nov 24 19:37:13 crc kubenswrapper[4812]: I1124 19:37:13.935312 4812 scope.go:117] "RemoveContainer" containerID="da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.075972 4812 scope.go:117] "RemoveContainer" containerID="bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.101729 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.153776 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-httpd-run\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.153907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-scripts\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.153950 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc7k7\" (UniqueName: \"kubernetes.io/projected/61c84290-c737-4cee-8ac2-cfe1bf02e656-kube-api-access-wc7k7\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.153972 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-public-tls-certs\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.154002 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-config-data\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.154024 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-logs\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.154098 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.154160 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-combined-ca-bundle\") pod \"61c84290-c737-4cee-8ac2-cfe1bf02e656\" (UID: \"61c84290-c737-4cee-8ac2-cfe1bf02e656\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.156357 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-logs" (OuterVolumeSpecName: "logs") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.156793 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.173710 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.197261 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c84290-c737-4cee-8ac2-cfe1bf02e656-kube-api-access-wc7k7" (OuterVolumeSpecName: "kube-api-access-wc7k7") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "kube-api-access-wc7k7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.208594 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-scripts" (OuterVolumeSpecName: "scripts") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.239653 4812 scope.go:117] "RemoveContainer" containerID="0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.259223 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.259550 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.259560 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc7k7\" (UniqueName: \"kubernetes.io/projected/61c84290-c737-4cee-8ac2-cfe1bf02e656-kube-api-access-wc7k7\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.259573 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61c84290-c737-4cee-8ac2-cfe1bf02e656-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.259595 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.307728 4812 scope.go:117] "RemoveContainer" containerID="61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78" Nov 24 19:37:14 crc kubenswrapper[4812]: E1124 19:37:14.309375 4812 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78\": container with ID starting with 61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78 not found: ID does not exist" containerID="61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.309419 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78"} err="failed to get container status \"61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78\": rpc error: code = NotFound desc = could not find container \"61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78\": container with ID starting with 61636222c24dbc69d2827dbc5ca5cbbb7dd96e86f967026404630566bd435d78 not found: ID does not exist" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.309448 4812 scope.go:117] "RemoveContainer" containerID="da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286" Nov 24 19:37:14 crc kubenswrapper[4812]: E1124 19:37:14.310077 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286\": container with ID starting with da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286 not found: ID does not exist" containerID="da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.310143 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286"} err="failed to get container status \"da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286\": rpc error: code = NotFound desc = could not find container \"da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286\": container with ID starting with da7e2b25a70c3fba95c4a6655e698a5160edbc87c129cdd5d1209dffed50a286 not found: ID does not exist" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.310169 4812 scope.go:117] "RemoveContainer" containerID="bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1" Nov 24 19:37:14 crc kubenswrapper[4812]: E1124 19:37:14.311098 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1\": container with ID starting with bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1 not found: ID does not exist" containerID="bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.311127 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1"} err="failed to get container status \"bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1\": rpc error: code = NotFound desc = could not find container \"bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1\": container with ID starting with bc05ae9e099323cf253c539256e6465cdc984a072acda5989444fd5f65b298b1 not found: ID does not exist" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.311154 4812 scope.go:117] "RemoveContainer" 
containerID="0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055" Nov 24 19:37:14 crc kubenswrapper[4812]: E1124 19:37:14.311854 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055\": container with ID starting with 0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055 not found: ID does not exist" containerID="0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.311876 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055"} err="failed to get container status \"0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055\": rpc error: code = NotFound desc = could not find container \"0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055\": container with ID starting with 0a203479b0f595f21c194ef6e751fa70d1d548b8e651e778d697aa9152d02055 not found: ID does not exist" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.341998 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.360659 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.454717 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.460675 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.465895 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.485155 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-config-data" (OuterVolumeSpecName: "config-data") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.502519 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "61c84290-c737-4cee-8ac2-cfe1bf02e656" (UID: "61c84290-c737-4cee-8ac2-cfe1bf02e656"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.516956 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.581884 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-config-data\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582206 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-logs\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582298 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-internal-tls-certs\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldbp4\" (UniqueName: \"kubernetes.io/projected/b4073e4e-f8f0-4ef9-a813-87920edaa840-kube-api-access-ldbp4\") pod \"b4073e4e-f8f0-4ef9-a813-87920edaa840\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582376 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-ovndb-tls-certs\") pod \"b4073e4e-f8f0-4ef9-a813-87920edaa840\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582405 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-scripts\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582448 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-config\") pod \"b4073e4e-f8f0-4ef9-a813-87920edaa840\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582466 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-combined-ca-bundle\") pod \"b4073e4e-f8f0-4ef9-a813-87920edaa840\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582522 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bp2l\" (UniqueName: \"kubernetes.io/projected/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-kube-api-access-8bp2l\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 
19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582574 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-httpd-config\") pod \"b4073e4e-f8f0-4ef9-a813-87920edaa840\" (UID: \"b4073e4e-f8f0-4ef9-a813-87920edaa840\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582628 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-combined-ca-bundle\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.582642 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-httpd-run\") pod \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\" (UID: \"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905\") " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.583370 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.584200 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.584254 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.584273 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/61c84290-c737-4cee-8ac2-cfe1bf02e656-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.589426 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-logs" (OuterVolumeSpecName: "logs") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.599945 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "b4073e4e-f8f0-4ef9-a813-87920edaa840" (UID: "b4073e4e-f8f0-4ef9-a813-87920edaa840"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.599965 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.601680 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4073e4e-f8f0-4ef9-a813-87920edaa840-kube-api-access-ldbp4" (OuterVolumeSpecName: "kube-api-access-ldbp4") pod "b4073e4e-f8f0-4ef9-a813-87920edaa840" (UID: "b4073e4e-f8f0-4ef9-a813-87920edaa840"). InnerVolumeSpecName "kube-api-access-ldbp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.606896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-kube-api-access-8bp2l" (OuterVolumeSpecName: "kube-api-access-8bp2l") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "kube-api-access-8bp2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.615915 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-scripts" (OuterVolumeSpecName: "scripts") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.685577 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.685607 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.685633 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.685643 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldbp4\" (UniqueName: \"kubernetes.io/projected/b4073e4e-f8f0-4ef9-a813-87920edaa840-kube-api-access-ldbp4\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.685653 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.685660 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bp2l\" (UniqueName: \"kubernetes.io/projected/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-kube-api-access-8bp2l\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.704441 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4073e4e-f8f0-4ef9-a813-87920edaa840" (UID: "b4073e4e-f8f0-4ef9-a813-87920edaa840"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.714089 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.719961 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.725702 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.728269 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-config" (OuterVolumeSpecName: "config") pod "b4073e4e-f8f0-4ef9-a813-87920edaa840" (UID: "b4073e4e-f8f0-4ef9-a813-87920edaa840"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.775843 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-config-data" (OuterVolumeSpecName: "config-data") pod "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" (UID: "1b6e1e33-ebbe-4cfc-906f-7f9ac961b905"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.783907 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "b4073e4e-f8f0-4ef9-a813-87920edaa840" (UID: "b4073e4e-f8f0-4ef9-a813-87920edaa840"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.789398 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.789434 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.789444 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.789454 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.789465 4812 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.789475 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.789483 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4073e4e-f8f0-4ef9-a813-87920edaa840-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.920761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1b6e1e33-ebbe-4cfc-906f-7f9ac961b905","Type":"ContainerDied","Data":"b83ed651766287ec7bf8b5d5165367a7b5529e1dd1e57e8c4691a9967e828824"} Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.920813 4812 scope.go:117] "RemoveContainer" containerID="9ba3157ec54faa9b0fe3a70b70f8e9b99036f5aa81cab267d90afa91ac91cfc9" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.920951 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:14 crc kubenswrapper[4812]: W1124 19:37:14.929519 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff101b79_8551_4d29_b686_d476abad5900.slice/crio-6a86968970dc97a18964562e38620620b340205928021d789a4a2170cc031fa5 WatchSource:0}: Error finding container 6a86968970dc97a18964562e38620620b340205928021d789a4a2170cc031fa5: Status 404 returned error can't find the container with id 6a86968970dc97a18964562e38620620b340205928021d789a4a2170cc031fa5 Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.931723 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-24r56"] Nov 24 19:37:14 crc kubenswrapper[4812]: W1124 19:37:14.935430 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29f01750_5996_43aa_9799_1f08a3e68b53.slice/crio-6ffa9c13a4e9f3cc246ee9874272c24915d800426e8368cb7be975a9f2e73deb WatchSource:0}: Error finding container 6ffa9c13a4e9f3cc246ee9874272c24915d800426e8368cb7be975a9f2e73deb: Status 404 returned error can't find the container with id 6ffa9c13a4e9f3cc246ee9874272c24915d800426e8368cb7be975a9f2e73deb Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.935874 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86f4846457-t2fl4" event={"ID":"418d4320-50e7-4767-bed7-db43af1583f0","Type":"ContainerStarted","Data":"37219908d6616db696c761cac954619d4884750ea17c429640079a84f18bb5e8"} Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.937263 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.938236 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.941897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f7453e8-f9c5-4588-80b8-82bba37e1514","Type":"ContainerStarted","Data":"efc44587faa3ea03e56f32c183bd4613a63898038cbd2f133bb473692484cc9e"} Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.947302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-c5h9s"] Nov 24 19:37:14 crc kubenswrapper[4812]: I1124 19:37:14.972788 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.010108 4812 scope.go:117] "RemoveContainer" containerID="d58f24f98b6547ef728cf80650b0bb8c34d954282d7cfc59ab58fb5f9fd62005" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.010446 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84884b67b4-mzcbv" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.011101 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-86f4846457-t2fl4" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.012952 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"61c84290-c737-4cee-8ac2-cfe1bf02e656","Type":"ContainerDied","Data":"2286e9664e0ad72046feb8ee264e75f1896ca113b6536b04af1cb77ba8d0a413"} Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.012993 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84884b67b4-mzcbv" event={"ID":"b4073e4e-f8f0-4ef9-a813-87920edaa840","Type":"ContainerDied","Data":"cf0979564bea0d93eb2ff28df3a86dac19318b839f6acaa80f7ea1a1da2c1681"} Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.013013 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5538-account-create-b9jdq"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.029546 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-63c1-account-create-zcbsq"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.034873 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-86f4846457-t2fl4" podStartSLOduration=10.034857934 podStartE2EDuration="10.034857934s" podCreationTimestamp="2025-11-24 19:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:14.953084225 +0000 UTC m=+1228.742036606" watchObservedRunningTime="2025-11-24 19:37:15.034857934 +0000 UTC m=+1228.823810305" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.055486 4812 scope.go:117] "RemoveContainer" containerID="9901b266eef29a4a8c9eb6aa553ed705c00a6fa7715ab7158d9f2421450a1193" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.070599 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.096147 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-wv687"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.116568 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.673491667 podStartE2EDuration="14.116552201s" podCreationTimestamp="2025-11-24 19:37:01 +0000 UTC" firstStartedPulling="2025-11-24 19:37:02.492227345 +0000 UTC m=+1216.281179716" lastFinishedPulling="2025-11-24 19:37:13.935287889 +0000 UTC m=+1227.724240250" observedRunningTime="2025-11-24 19:37:14.987992168 +0000 UTC m=+1228.776944539" watchObservedRunningTime="2025-11-24 19:37:15.116552201 +0000 UTC m=+1228.905504572" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.132121 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.142696 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2180-account-create-k94c2"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.151392 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 
19:37:15.160844 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.168399 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: E1124 19:37:15.168810 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-log" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.168826 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-log" Nov 24 19:37:15 crc kubenswrapper[4812]: E1124 19:37:15.168841 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.168849 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: E1124 19:37:15.168875 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-log" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.168881 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-log" Nov 24 19:37:15 crc kubenswrapper[4812]: E1124 19:37:15.168895 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-api" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.168901 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-api" Nov 24 19:37:15 crc kubenswrapper[4812]: E1124 19:37:15.168909 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.168915 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: E1124 19:37:15.168926 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.168932 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.169082 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.169097 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" containerName="glance-log" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.169114 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.169122 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-log" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.169129 4812 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" containerName="glance-httpd" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.169140 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" containerName="neutron-api" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.170013 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.180513 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.181075 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-z5qr5" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.181171 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.181218 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.192423 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.203251 4812 scope.go:117] "RemoveContainer" containerID="8bdc38b71291931cd55ce60ff4be4f9cb9e8f79ff63fb83d416f76649df38c59" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.212703 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.229863 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.243399 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84884b67b4-mzcbv"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.247383 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.253705 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84884b67b4-mzcbv"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.254555 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.257882 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.257998 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.265651 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.286680 4812 scope.go:117] "RemoveContainer" containerID="e7b27c07f1634e844989fa1c25a4ba21f6b40df5db57e3e377b5bf27b2d39a29" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ksh\" (UniqueName: \"kubernetes.io/projected/cf01449d-a52d-488f-bd93-b5f84b57fb13-kube-api-access-89ksh\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315447 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315522 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315667 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315827 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.315910 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316000 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316085 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316173 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316250 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316322 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-logs\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316410 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnjdq\" (UniqueName: \"kubernetes.io/projected/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-kube-api-access-mnjdq\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316493 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-logs\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.316567 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-config-data\") 
pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.353632 4812 scope.go:117] "RemoveContainer" containerID="79b5f87c486798f95647660266b1dbff5d59afbb11903583a975e0ccee7cb993" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.418510 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnjdq\" (UniqueName: \"kubernetes.io/projected/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-kube-api-access-mnjdq\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.418768 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-logs\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.418808 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419052 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89ksh\" (UniqueName: \"kubernetes.io/projected/cf01449d-a52d-488f-bd93-b5f84b57fb13-kube-api-access-89ksh\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419178 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419523 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419669 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419705 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419722 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419743 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419760 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419779 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419804 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419820 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.419840 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-logs\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.420166 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.420521 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-logs\") pod \"glance-default-external-api-0\" (UID: 
\"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.421541 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.421679 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.424410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-logs\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.424426 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.426769 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.429195 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.438921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.446381 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.446489 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 
24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.448125 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.451634 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.451807 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnjdq\" (UniqueName: \"kubernetes.io/projected/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-kube-api-access-mnjdq\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.457841 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89ksh\" (UniqueName: \"kubernetes.io/projected/cf01449d-a52d-488f-bd93-b5f84b57fb13-kube-api-access-89ksh\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.460486 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.463646 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-86f4846457-t2fl4" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.491389 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.598016 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.598124 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " pod="openstack/glance-default-internal-api-0" Nov 24 19:37:15 crc kubenswrapper[4812]: I1124 19:37:15.863567 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:16 crc kubenswrapper[4812]: E1124 19:37:16.055023 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53b4924c_df56_43c7_a032_83eb18d1eca0.slice/crio-1bf723392e2593de42b16d0252b5913f24c0fd77eff44f99268cc159a68845e0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53b4924c_df56_43c7_a032_83eb18d1eca0.slice/crio-conmon-1bf723392e2593de42b16d0252b5913f24c0fd77eff44f99268cc159a68845e0.scope\": RecentStats: unable to find data in memory cache]" Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.079242 4812 generic.go:334] "Generic (PLEG): container finished" podID="53b4924c-df56-43c7-a032-83eb18d1eca0" containerID="1bf723392e2593de42b16d0252b5913f24c0fd77eff44f99268cc159a68845e0" exitCode=0 Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.079402 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5538-account-create-b9jdq" event={"ID":"53b4924c-df56-43c7-a032-83eb18d1eca0","Type":"ContainerDied","Data":"1bf723392e2593de42b16d0252b5913f24c0fd77eff44f99268cc159a68845e0"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.079697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5538-account-create-b9jdq" event={"ID":"53b4924c-df56-43c7-a032-83eb18d1eca0","Type":"ContainerStarted","Data":"13885801b35903182a5b39b405dca909b7a7a365219b783cc69b7b240db73491"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.089210 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-24r56" event={"ID":"ff101b79-8551-4d29-b686-d476abad5900","Type":"ContainerStarted","Data":"6e758e24b2c99d6af7b60e8063d9a1fc7ae01657afe6750b9366e0330b759807"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.089246 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-24r56" event={"ID":"ff101b79-8551-4d29-b686-d476abad5900","Type":"ContainerStarted","Data":"6a86968970dc97a18964562e38620620b340205928021d789a4a2170cc031fa5"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.092320 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerStarted","Data":"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.092379 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerStarted","Data":"272f34fa0060ca352d818d1e4f94bbddaebac8efcf43ac55f50a665fd256e145"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.109559 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-63c1-account-create-zcbsq" event={"ID":"689ace22-4dce-411e-8558-8e84de48d105","Type":"ContainerStarted","Data":"0b125b25f534109552515427bd331b685d82ec3aed6a70917f3be56f55bbd100"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.109606 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-63c1-account-create-zcbsq" event={"ID":"689ace22-4dce-411e-8558-8e84de48d105","Type":"ContainerStarted","Data":"98f5c6521d919394053c202eb262a86d6c5401a8c7ebf0ee2123a87d7fede070"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 
19:37:16.127006 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7063bb41-d0d1-4605-b265-1fb3adce77b5","Type":"ContainerStarted","Data":"6f3944b14538688157807d59761a609046da96a18831c2ce92e137c597678a95"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.138893 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-63c1-account-create-zcbsq" podStartSLOduration=7.138877287 podStartE2EDuration="7.138877287s" podCreationTimestamp="2025-11-24 19:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:16.138071974 +0000 UTC m=+1229.927024345" watchObservedRunningTime="2025-11-24 19:37:16.138877287 +0000 UTC m=+1229.927829658" Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.144027 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wv687" event={"ID":"1ff63a07-b659-4f65-a752-4b244ec5b470","Type":"ContainerStarted","Data":"bae6552008f168d97c89dd33e9696bdc1c9a7a3e98584a611c0be209602e0b54"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.144062 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wv687" event={"ID":"1ff63a07-b659-4f65-a752-4b244ec5b470","Type":"ContainerStarted","Data":"8a3ac6ce64fcb5dd615efa7361319478622586c579e7b91804b91bf382dd4b82"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.149384 4812 generic.go:334] "Generic (PLEG): container finished" podID="948e94ea-f146-46a9-81e7-1254f7e3661e" containerID="f1a398de16ac12139cac0d17939f95f60164b26c6180c9b237fc5e13c46dac37" exitCode=0 Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.149439 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2180-account-create-k94c2" event={"ID":"948e94ea-f146-46a9-81e7-1254f7e3661e","Type":"ContainerDied","Data":"f1a398de16ac12139cac0d17939f95f60164b26c6180c9b237fc5e13c46dac37"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.149530 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2180-account-create-k94c2" event={"ID":"948e94ea-f146-46a9-81e7-1254f7e3661e","Type":"ContainerStarted","Data":"a6436d15c02395a105507e5d2ae2d4b2af86bc1d80bfb366deac269eaf3e53e7"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.151403 4812 generic.go:334] "Generic (PLEG): container finished" podID="29f01750-5996-43aa-9799-1f08a3e68b53" containerID="529e2e8483b00d8d42ea5291410b0b7c3d3fa0e80bd3e854a931f9e02123e436" exitCode=0 Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.152417 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c5h9s" event={"ID":"29f01750-5996-43aa-9799-1f08a3e68b53","Type":"ContainerDied","Data":"529e2e8483b00d8d42ea5291410b0b7c3d3fa0e80bd3e854a931f9e02123e436"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.152434 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c5h9s" event={"ID":"29f01750-5996-43aa-9799-1f08a3e68b53","Type":"ContainerStarted","Data":"6ffa9c13a4e9f3cc246ee9874272c24915d800426e8368cb7be975a9f2e73deb"} Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.159745 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-wv687" podStartSLOduration=7.159724446 podStartE2EDuration="7.159724446s" podCreationTimestamp="2025-11-24 19:37:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:16.154882327 +0000 UTC m=+1229.943834698" watchObservedRunningTime="2025-11-24 19:37:16.159724446 +0000 UTC m=+1229.948676817" Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.166933 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.316651 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:37:16 crc kubenswrapper[4812]: W1124 19:37:16.323275 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf01449d_a52d_488f_bd93_b5f84b57fb13.slice/crio-c255d371213c86389ccde603aedf29d3762bbabbcf68894c469acb43318c4a54 WatchSource:0}: Error finding container c255d371213c86389ccde603aedf29d3762bbabbcf68894c469acb43318c4a54: Status 404 returned error can't find the container with id c255d371213c86389ccde603aedf29d3762bbabbcf68894c469acb43318c4a54 Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.459128 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:37:16 crc kubenswrapper[4812]: W1124 19:37:16.465420 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f4002e_a9bd_462d_b5ed_ce5ea166ec16.slice/crio-2eaa5b687f6579881db404a651bac16524d0b6ab0ef469c215247b1c55749d79 WatchSource:0}: Error finding container 2eaa5b687f6579881db404a651bac16524d0b6ab0ef469c215247b1c55749d79: Status 404 returned error can't find the container with id 2eaa5b687f6579881db404a651bac16524d0b6ab0ef469c215247b1c55749d79 Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.983700 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6e1e33-ebbe-4cfc-906f-7f9ac961b905" path="/var/lib/kubelet/pods/1b6e1e33-ebbe-4cfc-906f-7f9ac961b905/volumes" Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.984681 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c84290-c737-4cee-8ac2-cfe1bf02e656" path="/var/lib/kubelet/pods/61c84290-c737-4cee-8ac2-cfe1bf02e656/volumes" Nov 24 19:37:16 crc kubenswrapper[4812]: I1124 19:37:16.985227 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4073e4e-f8f0-4ef9-a813-87920edaa840" path="/var/lib/kubelet/pods/b4073e4e-f8f0-4ef9-a813-87920edaa840/volumes" Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.178500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf01449d-a52d-488f-bd93-b5f84b57fb13","Type":"ContainerStarted","Data":"289e772fea34cd9b5573b8563f47c8b619a399e0ab44c84b1b81de861433540a"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.178824 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf01449d-a52d-488f-bd93-b5f84b57fb13","Type":"ContainerStarted","Data":"c255d371213c86389ccde603aedf29d3762bbabbcf68894c469acb43318c4a54"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.185257 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ff63a07-b659-4f65-a752-4b244ec5b470" containerID="bae6552008f168d97c89dd33e9696bdc1c9a7a3e98584a611c0be209602e0b54" exitCode=0 Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 
19:37:17.185309 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wv687" event={"ID":"1ff63a07-b659-4f65-a752-4b244ec5b470","Type":"ContainerDied","Data":"bae6552008f168d97c89dd33e9696bdc1c9a7a3e98584a611c0be209602e0b54"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.214171 4812 generic.go:334] "Generic (PLEG): container finished" podID="ff101b79-8551-4d29-b686-d476abad5900" containerID="6e758e24b2c99d6af7b60e8063d9a1fc7ae01657afe6750b9366e0330b759807" exitCode=0 Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.214240 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-24r56" event={"ID":"ff101b79-8551-4d29-b686-d476abad5900","Type":"ContainerDied","Data":"6e758e24b2c99d6af7b60e8063d9a1fc7ae01657afe6750b9366e0330b759807"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.219370 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f4002e-a9bd-462d-b5ed-ce5ea166ec16","Type":"ContainerStarted","Data":"2eaa5b687f6579881db404a651bac16524d0b6ab0ef469c215247b1c55749d79"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.222088 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerStarted","Data":"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.223986 4812 generic.go:334] "Generic (PLEG): container finished" podID="689ace22-4dce-411e-8558-8e84de48d105" containerID="0b125b25f534109552515427bd331b685d82ec3aed6a70917f3be56f55bbd100" exitCode=0 Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.224033 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-63c1-account-create-zcbsq" event={"ID":"689ace22-4dce-411e-8558-8e84de48d105","Type":"ContainerDied","Data":"0b125b25f534109552515427bd331b685d82ec3aed6a70917f3be56f55bbd100"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.260477 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7063bb41-d0d1-4605-b265-1fb3adce77b5","Type":"ContainerStarted","Data":"03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.260554 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7063bb41-d0d1-4605-b265-1fb3adce77b5","Type":"ContainerStarted","Data":"de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b"} Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.332594 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.332571377 podStartE2EDuration="10.332571377s" podCreationTimestamp="2025-11-24 19:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:17.289280753 +0000 UTC m=+1231.078233124" watchObservedRunningTime="2025-11-24 19:37:17.332571377 +0000 UTC m=+1231.121523748" Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.576907 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.812290 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.918343 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jxpr\" (UniqueName: \"kubernetes.io/projected/ff101b79-8551-4d29-b686-d476abad5900-kube-api-access-2jxpr\") pod \"ff101b79-8551-4d29-b686-d476abad5900\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.918390 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff101b79-8551-4d29-b686-d476abad5900-operator-scripts\") pod \"ff101b79-8551-4d29-b686-d476abad5900\" (UID: \"ff101b79-8551-4d29-b686-d476abad5900\") " Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.919262 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff101b79-8551-4d29-b686-d476abad5900-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff101b79-8551-4d29-b686-d476abad5900" (UID: "ff101b79-8551-4d29-b686-d476abad5900"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:37:17 crc kubenswrapper[4812]: I1124 19:37:17.924354 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff101b79-8551-4d29-b686-d476abad5900-kube-api-access-2jxpr" (OuterVolumeSpecName: "kube-api-access-2jxpr") pod "ff101b79-8551-4d29-b686-d476abad5900" (UID: "ff101b79-8551-4d29-b686-d476abad5900"). InnerVolumeSpecName "kube-api-access-2jxpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.014411 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.020430 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jxpr\" (UniqueName: \"kubernetes.io/projected/ff101b79-8551-4d29-b686-d476abad5900-kube-api-access-2jxpr\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.020461 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff101b79-8551-4d29-b686-d476abad5900-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.021024 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.055110 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.121156 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/948e94ea-f146-46a9-81e7-1254f7e3661e-operator-scripts\") pod \"948e94ea-f146-46a9-81e7-1254f7e3661e\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.121541 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9vs2\" (UniqueName: \"kubernetes.io/projected/948e94ea-f146-46a9-81e7-1254f7e3661e-kube-api-access-z9vs2\") pod \"948e94ea-f146-46a9-81e7-1254f7e3661e\" (UID: \"948e94ea-f146-46a9-81e7-1254f7e3661e\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.121595 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psndm\" (UniqueName: \"kubernetes.io/projected/29f01750-5996-43aa-9799-1f08a3e68b53-kube-api-access-psndm\") pod \"29f01750-5996-43aa-9799-1f08a3e68b53\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.121631 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b4924c-df56-43c7-a032-83eb18d1eca0-operator-scripts\") pod \"53b4924c-df56-43c7-a032-83eb18d1eca0\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.121667 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75ct7\" (UniqueName: \"kubernetes.io/projected/53b4924c-df56-43c7-a032-83eb18d1eca0-kube-api-access-75ct7\") pod \"53b4924c-df56-43c7-a032-83eb18d1eca0\" (UID: \"53b4924c-df56-43c7-a032-83eb18d1eca0\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.121689 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f01750-5996-43aa-9799-1f08a3e68b53-operator-scripts\") pod \"29f01750-5996-43aa-9799-1f08a3e68b53\" (UID: \"29f01750-5996-43aa-9799-1f08a3e68b53\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.121713 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/948e94ea-f146-46a9-81e7-1254f7e3661e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "948e94ea-f146-46a9-81e7-1254f7e3661e" (UID: "948e94ea-f146-46a9-81e7-1254f7e3661e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.122127 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/948e94ea-f146-46a9-81e7-1254f7e3661e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.123368 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53b4924c-df56-43c7-a032-83eb18d1eca0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53b4924c-df56-43c7-a032-83eb18d1eca0" (UID: "53b4924c-df56-43c7-a032-83eb18d1eca0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.124068 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29f01750-5996-43aa-9799-1f08a3e68b53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29f01750-5996-43aa-9799-1f08a3e68b53" (UID: "29f01750-5996-43aa-9799-1f08a3e68b53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.125895 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f01750-5996-43aa-9799-1f08a3e68b53-kube-api-access-psndm" (OuterVolumeSpecName: "kube-api-access-psndm") pod "29f01750-5996-43aa-9799-1f08a3e68b53" (UID: "29f01750-5996-43aa-9799-1f08a3e68b53"). InnerVolumeSpecName "kube-api-access-psndm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.128593 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948e94ea-f146-46a9-81e7-1254f7e3661e-kube-api-access-z9vs2" (OuterVolumeSpecName: "kube-api-access-z9vs2") pod "948e94ea-f146-46a9-81e7-1254f7e3661e" (UID: "948e94ea-f146-46a9-81e7-1254f7e3661e"). InnerVolumeSpecName "kube-api-access-z9vs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.128907 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53b4924c-df56-43c7-a032-83eb18d1eca0-kube-api-access-75ct7" (OuterVolumeSpecName: "kube-api-access-75ct7") pod "53b4924c-df56-43c7-a032-83eb18d1eca0" (UID: "53b4924c-df56-43c7-a032-83eb18d1eca0"). InnerVolumeSpecName "kube-api-access-75ct7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.224058 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9vs2\" (UniqueName: \"kubernetes.io/projected/948e94ea-f146-46a9-81e7-1254f7e3661e-kube-api-access-z9vs2\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.224102 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psndm\" (UniqueName: \"kubernetes.io/projected/29f01750-5996-43aa-9799-1f08a3e68b53-kube-api-access-psndm\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.224116 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53b4924c-df56-43c7-a032-83eb18d1eca0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.224127 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75ct7\" (UniqueName: \"kubernetes.io/projected/53b4924c-df56-43c7-a032-83eb18d1eca0-kube-api-access-75ct7\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.224139 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29f01750-5996-43aa-9799-1f08a3e68b53-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.268676 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f4002e-a9bd-462d-b5ed-ce5ea166ec16","Type":"ContainerStarted","Data":"a37153fef35e2218bd72e8d349e3157d2b4261a45df7d03e4deb1d5ee538a030"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.268713 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f4002e-a9bd-462d-b5ed-ce5ea166ec16","Type":"ContainerStarted","Data":"421d219729fffc3e229f6f163aa63a1f6a6b2a4dd6565bac6b4f81cad77dfa57"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.270574 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerStarted","Data":"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.272515 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf01449d-a52d-488f-bd93-b5f84b57fb13","Type":"ContainerStarted","Data":"0246dd3196f30fe8b017001268ba69633ac8f5832eaad85f9a757939e217b9fb"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.275454 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2180-account-create-k94c2" event={"ID":"948e94ea-f146-46a9-81e7-1254f7e3661e","Type":"ContainerDied","Data":"a6436d15c02395a105507e5d2ae2d4b2af86bc1d80bfb366deac269eaf3e53e7"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.275484 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6436d15c02395a105507e5d2ae2d4b2af86bc1d80bfb366deac269eaf3e53e7" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.275472 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2180-account-create-k94c2" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.279218 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-5538-account-create-b9jdq" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.279227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5538-account-create-b9jdq" event={"ID":"53b4924c-df56-43c7-a032-83eb18d1eca0","Type":"ContainerDied","Data":"13885801b35903182a5b39b405dca909b7a7a365219b783cc69b7b240db73491"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.279259 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13885801b35903182a5b39b405dca909b7a7a365219b783cc69b7b240db73491" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.280916 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-c5h9s" event={"ID":"29f01750-5996-43aa-9799-1f08a3e68b53","Type":"ContainerDied","Data":"6ffa9c13a4e9f3cc246ee9874272c24915d800426e8368cb7be975a9f2e73deb"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.280935 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-c5h9s" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.280944 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffa9c13a4e9f3cc246ee9874272c24915d800426e8368cb7be975a9f2e73deb" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.282138 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-24r56" event={"ID":"ff101b79-8551-4d29-b686-d476abad5900","Type":"ContainerDied","Data":"6a86968970dc97a18964562e38620620b340205928021d789a4a2170cc031fa5"} Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.282182 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a86968970dc97a18964562e38620620b340205928021d789a4a2170cc031fa5" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.282222 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-24r56" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.308719 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.308703217 podStartE2EDuration="3.308703217s" podCreationTimestamp="2025-11-24 19:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:18.29872384 +0000 UTC m=+1232.087676211" watchObservedRunningTime="2025-11-24 19:37:18.308703217 +0000 UTC m=+1232.097655578" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.334278 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.3342606200000002 podStartE2EDuration="3.33426062s" podCreationTimestamp="2025-11-24 19:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:18.326164077 +0000 UTC m=+1232.115116448" watchObservedRunningTime="2025-11-24 19:37:18.33426062 +0000 UTC m=+1232.123212991" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.791941 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.797804 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.837639 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr9sr\" (UniqueName: \"kubernetes.io/projected/1ff63a07-b659-4f65-a752-4b244ec5b470-kube-api-access-cr9sr\") pod \"1ff63a07-b659-4f65-a752-4b244ec5b470\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.837822 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff63a07-b659-4f65-a752-4b244ec5b470-operator-scripts\") pod \"1ff63a07-b659-4f65-a752-4b244ec5b470\" (UID: \"1ff63a07-b659-4f65-a752-4b244ec5b470\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.838671 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ff63a07-b659-4f65-a752-4b244ec5b470-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ff63a07-b659-4f65-a752-4b244ec5b470" (UID: "1ff63a07-b659-4f65-a752-4b244ec5b470"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.845572 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff63a07-b659-4f65-a752-4b244ec5b470-kube-api-access-cr9sr" (OuterVolumeSpecName: "kube-api-access-cr9sr") pod "1ff63a07-b659-4f65-a752-4b244ec5b470" (UID: "1ff63a07-b659-4f65-a752-4b244ec5b470"). InnerVolumeSpecName "kube-api-access-cr9sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.940358 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5hjd\" (UniqueName: \"kubernetes.io/projected/689ace22-4dce-411e-8558-8e84de48d105-kube-api-access-m5hjd\") pod \"689ace22-4dce-411e-8558-8e84de48d105\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.940410 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689ace22-4dce-411e-8558-8e84de48d105-operator-scripts\") pod \"689ace22-4dce-411e-8558-8e84de48d105\" (UID: \"689ace22-4dce-411e-8558-8e84de48d105\") " Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.940739 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr9sr\" (UniqueName: \"kubernetes.io/projected/1ff63a07-b659-4f65-a752-4b244ec5b470-kube-api-access-cr9sr\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.940758 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ff63a07-b659-4f65-a752-4b244ec5b470-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.940826 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/689ace22-4dce-411e-8558-8e84de48d105-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "689ace22-4dce-411e-8558-8e84de48d105" (UID: "689ace22-4dce-411e-8558-8e84de48d105"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:37:18 crc kubenswrapper[4812]: I1124 19:37:18.944501 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/689ace22-4dce-411e-8558-8e84de48d105-kube-api-access-m5hjd" (OuterVolumeSpecName: "kube-api-access-m5hjd") pod "689ace22-4dce-411e-8558-8e84de48d105" (UID: "689ace22-4dce-411e-8558-8e84de48d105"). InnerVolumeSpecName "kube-api-access-m5hjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.042979 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5hjd\" (UniqueName: \"kubernetes.io/projected/689ace22-4dce-411e-8558-8e84de48d105-kube-api-access-m5hjd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.043315 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/689ace22-4dce-411e-8558-8e84de48d105-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.292251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerStarted","Data":"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267"} Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.292401 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-central-agent" containerID="cri-o://a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" gracePeriod=30 Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.292624 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.292832 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="proxy-httpd" containerID="cri-o://08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" gracePeriod=30 Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.292868 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="sg-core" containerID="cri-o://b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" gracePeriod=30 Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.292897 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-notification-agent" containerID="cri-o://41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" gracePeriod=30 Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.297443 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-63c1-account-create-zcbsq" event={"ID":"689ace22-4dce-411e-8558-8e84de48d105","Type":"ContainerDied","Data":"98f5c6521d919394053c202eb262a86d6c5401a8c7ebf0ee2123a87d7fede070"} Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.297480 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f5c6521d919394053c202eb262a86d6c5401a8c7ebf0ee2123a87d7fede070" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.297524 4812 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-63c1-account-create-zcbsq" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.299837 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-wv687" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.300153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-wv687" event={"ID":"1ff63a07-b659-4f65-a752-4b244ec5b470","Type":"ContainerDied","Data":"8a3ac6ce64fcb5dd615efa7361319478622586c579e7b91804b91bf382dd4b82"} Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.300173 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3ac6ce64fcb5dd615efa7361319478622586c579e7b91804b91bf382dd4b82" Nov 24 19:37:19 crc kubenswrapper[4812]: I1124 19:37:19.325736 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.810934047 podStartE2EDuration="12.32572033s" podCreationTimestamp="2025-11-24 19:37:07 +0000 UTC" firstStartedPulling="2025-11-24 19:37:15.073581516 +0000 UTC m=+1228.862533887" lastFinishedPulling="2025-11-24 19:37:18.588367789 +0000 UTC m=+1232.377320170" observedRunningTime="2025-11-24 19:37:19.323509256 +0000 UTC m=+1233.112461657" watchObservedRunningTime="2025-11-24 19:37:19.32572033 +0000 UTC m=+1233.114672701" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.085437 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.185881 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-combined-ca-bundle\") pod \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.185933 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-config-data\") pod \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.185971 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpklf\" (UniqueName: \"kubernetes.io/projected/5b22484e-ecb8-48ad-846a-9bf09c732ad3-kube-api-access-zpklf\") pod \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.186124 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-run-httpd\") pod \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.186154 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-scripts\") pod \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.186180 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-sg-core-conf-yaml\") pod \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.186234 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-log-httpd\") pod \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\" (UID: \"5b22484e-ecb8-48ad-846a-9bf09c732ad3\") " Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.187226 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5b22484e-ecb8-48ad-846a-9bf09c732ad3" (UID: "5b22484e-ecb8-48ad-846a-9bf09c732ad3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.187468 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5b22484e-ecb8-48ad-846a-9bf09c732ad3" (UID: "5b22484e-ecb8-48ad-846a-9bf09c732ad3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.199589 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b22484e-ecb8-48ad-846a-9bf09c732ad3-kube-api-access-zpklf" (OuterVolumeSpecName: "kube-api-access-zpklf") pod "5b22484e-ecb8-48ad-846a-9bf09c732ad3" (UID: "5b22484e-ecb8-48ad-846a-9bf09c732ad3"). InnerVolumeSpecName "kube-api-access-zpklf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.200617 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-scripts" (OuterVolumeSpecName: "scripts") pod "5b22484e-ecb8-48ad-846a-9bf09c732ad3" (UID: "5b22484e-ecb8-48ad-846a-9bf09c732ad3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.272570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5b22484e-ecb8-48ad-846a-9bf09c732ad3" (UID: "5b22484e-ecb8-48ad-846a-9bf09c732ad3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283313 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrdjt"] Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283682 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-notification-agent" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283697 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-notification-agent" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283718 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53b4924c-df56-43c7-a032-83eb18d1eca0" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283724 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="53b4924c-df56-43c7-a032-83eb18d1eca0" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283741 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-central-agent" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283747 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-central-agent" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283755 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f01750-5996-43aa-9799-1f08a3e68b53" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283760 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f01750-5996-43aa-9799-1f08a3e68b53" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283771 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948e94ea-f146-46a9-81e7-1254f7e3661e" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283777 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="948e94ea-f146-46a9-81e7-1254f7e3661e" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283791 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="proxy-httpd" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283797 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="proxy-httpd" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283807 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="689ace22-4dce-411e-8558-8e84de48d105" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283813 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="689ace22-4dce-411e-8558-8e84de48d105" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283823 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="sg-core" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283828 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="sg-core" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283838 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff101b79-8551-4d29-b686-d476abad5900" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283843 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff101b79-8551-4d29-b686-d476abad5900" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.283854 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff63a07-b659-4f65-a752-4b244ec5b470" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.283860 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff63a07-b659-4f65-a752-4b244ec5b470" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284030 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f01750-5996-43aa-9799-1f08a3e68b53" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284045 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-central-agent" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284055 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff101b79-8551-4d29-b686-d476abad5900" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284062 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="53b4924c-df56-43c7-a032-83eb18d1eca0" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284072 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="sg-core" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284083 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="ceilometer-notification-agent" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284094 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="689ace22-4dce-411e-8558-8e84de48d105" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284105 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff63a07-b659-4f65-a752-4b244ec5b470" containerName="mariadb-database-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284116 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerName="proxy-httpd" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284129 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="948e94ea-f146-46a9-81e7-1254f7e3661e" containerName="mariadb-account-create" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.284675 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.289405 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.289437 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.289446 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.289456 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5b22484e-ecb8-48ad-846a-9bf09c732ad3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.289464 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpklf\" (UniqueName: \"kubernetes.io/projected/5b22484e-ecb8-48ad-846a-9bf09c732ad3-kube-api-access-zpklf\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.293811 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.293978 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.309837 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2rhhb" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.310480 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-config-data" (OuterVolumeSpecName: "config-data") pod "5b22484e-ecb8-48ad-846a-9bf09c732ad3" (UID: "5b22484e-ecb8-48ad-846a-9bf09c732ad3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.311742 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrdjt"] Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330572 4812 generic.go:334] "Generic (PLEG): container finished" podID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerID="08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" exitCode=0 Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330602 4812 generic.go:334] "Generic (PLEG): container finished" podID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerID="b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" exitCode=2 Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330611 4812 generic.go:334] "Generic (PLEG): container finished" podID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerID="41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" exitCode=0 Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330619 4812 generic.go:334] "Generic (PLEG): container finished" podID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" containerID="a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" exitCode=0 Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330639 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerDied","Data":"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267"} Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330666 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerDied","Data":"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb"} Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerDied","Data":"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1"} Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330686 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerDied","Data":"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e"} Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330695 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5b22484e-ecb8-48ad-846a-9bf09c732ad3","Type":"ContainerDied","Data":"272f34fa0060ca352d818d1e4f94bbddaebac8efcf43ac55f50a665fd256e145"} Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330710 4812 scope.go:117] "RemoveContainer" containerID="08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.330855 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.336559 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b22484e-ecb8-48ad-846a-9bf09c732ad3" (UID: "5b22484e-ecb8-48ad-846a-9bf09c732ad3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.358517 4812 scope.go:117] "RemoveContainer" containerID="b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.380481 4812 scope.go:117] "RemoveContainer" containerID="41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.391368 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9v8w\" (UniqueName: \"kubernetes.io/projected/a238488e-998f-478f-8163-452bb47b4dfd-kube-api-access-g9v8w\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.391402 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.391444 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-scripts\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.391491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-config-data\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.391546 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.391557 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b22484e-ecb8-48ad-846a-9bf09c732ad3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.412528 4812 scope.go:117] "RemoveContainer" containerID="a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.434408 4812 scope.go:117] "RemoveContainer" containerID="08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.434889 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": container with ID starting with 08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267 not found: ID does not exist" containerID="08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.434921 4812 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267"} err="failed to get container status \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": rpc error: code = NotFound desc = could not find container \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": container with ID starting with 08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.434940 4812 scope.go:117] "RemoveContainer" containerID="b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.435250 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": container with ID starting with b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb not found: ID does not exist" containerID="b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.435294 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb"} err="failed to get container status \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": rpc error: code = NotFound desc = could not find container \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": container with ID starting with b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.435321 4812 scope.go:117] "RemoveContainer" containerID="41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.435742 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": container with ID starting with 41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1 not found: ID does not exist" containerID="41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.435783 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1"} err="failed to get container status \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": rpc error: code = NotFound desc = could not find container \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": container with ID starting with 41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.435811 4812 scope.go:117] "RemoveContainer" containerID="a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" Nov 24 19:37:20 crc kubenswrapper[4812]: E1124 19:37:20.436117 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": container with ID starting with a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e not found: ID does not exist" 
containerID="a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.436161 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e"} err="failed to get container status \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": rpc error: code = NotFound desc = could not find container \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": container with ID starting with a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.436193 4812 scope.go:117] "RemoveContainer" containerID="08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.436577 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267"} err="failed to get container status \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": rpc error: code = NotFound desc = could not find container \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": container with ID starting with 08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.436605 4812 scope.go:117] "RemoveContainer" containerID="b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.436880 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb"} err="failed to get container status \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": rpc error: code = NotFound desc = could not find container \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": container with ID starting with b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.436908 4812 scope.go:117] "RemoveContainer" containerID="41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.437560 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1"} err="failed to get container status \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": rpc error: code = NotFound desc = could not find container \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": container with ID starting with 41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.437598 4812 scope.go:117] "RemoveContainer" containerID="a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.437787 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e"} err="failed to get container status \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": rpc error: code = NotFound desc = could not find 
container \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": container with ID starting with a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.437807 4812 scope.go:117] "RemoveContainer" containerID="08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.437990 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267"} err="failed to get container status \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": rpc error: code = NotFound desc = could not find container \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": container with ID starting with 08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438009 4812 scope.go:117] "RemoveContainer" containerID="b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438186 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb"} err="failed to get container status \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": rpc error: code = NotFound desc = could not find container \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": container with ID starting with b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438206 4812 scope.go:117] "RemoveContainer" containerID="41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438391 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1"} err="failed to get container status \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": rpc error: code = NotFound desc = could not find container \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": container with ID starting with 41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438420 4812 scope.go:117] "RemoveContainer" containerID="a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438585 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e"} err="failed to get container status \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": rpc error: code = NotFound desc = could not find container \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": container with ID starting with a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438602 4812 scope.go:117] "RemoveContainer" containerID="08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438767 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267"} err="failed to get container status \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": rpc error: code = NotFound desc = could not find container \"08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267\": container with ID starting with 08630e2bd9d08102eda25ae100afc2011d097cb13a442d580fd1f0bc939e0267 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438787 4812 scope.go:117] "RemoveContainer" containerID="b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438965 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb"} err="failed to get container status \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": rpc error: code = NotFound desc = could not find container \"b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb\": container with ID starting with b1f03b10e19b61f6730b42c14379dab33d5c02ed9ea8287bac695a72ff8289fb not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.438982 4812 scope.go:117] "RemoveContainer" containerID="41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.439155 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1"} err="failed to get container status \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": rpc error: code = NotFound desc = could not find container \"41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1\": container with ID starting with 41a3d5d0d8ec6e16ffb6cb4a4c3c4aa65ed25f55fb749c5973fdd00dfe8433a1 not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.439174 4812 scope.go:117] "RemoveContainer" containerID="a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.439352 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e"} err="failed to get container status \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": rpc error: code = NotFound desc = could not find container \"a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e\": container with ID starting with a19c66d1c9c0370e3a4e0fc0429874e491ac0734536730c3db6b9f312195e89e not found: ID does not exist" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.500092 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9v8w\" (UniqueName: \"kubernetes.io/projected/a238488e-998f-478f-8163-452bb47b4dfd-kube-api-access-g9v8w\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.500142 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.500192 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-scripts\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.500243 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-config-data\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.504427 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.504850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-scripts\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.505809 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-config-data\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.519791 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9v8w\" (UniqueName: \"kubernetes.io/projected/a238488e-998f-478f-8163-452bb47b4dfd-kube-api-access-g9v8w\") pod \"nova-cell0-conductor-db-sync-zrdjt\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.623183 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.679090 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.694202 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.709900 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.712570 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.718726 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.718896 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.727673 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.809242 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.809289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-scripts\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.809317 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htjk\" (UniqueName: \"kubernetes.io/projected/546ca8a3-fd87-4b43-9f22-418f3b5619d8-kube-api-access-4htjk\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.809355 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.809457 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-config-data\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.809607 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-run-httpd\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.809778 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-log-httpd\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.911956 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 
19:37:20.912838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-config-data\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.912872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-run-httpd\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.912942 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-log-httpd\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.912994 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.913017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-scripts\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.913047 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4htjk\" (UniqueName: \"kubernetes.io/projected/546ca8a3-fd87-4b43-9f22-418f3b5619d8-kube-api-access-4htjk\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.914000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-run-httpd\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.914564 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-log-httpd\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.919449 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-scripts\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.923431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.928783 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-config-data\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.935060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4htjk\" (UniqueName: \"kubernetes.io/projected/546ca8a3-fd87-4b43-9f22-418f3b5619d8-kube-api-access-4htjk\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.938060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " pod="openstack/ceilometer-0" Nov 24 19:37:20 crc kubenswrapper[4812]: I1124 19:37:20.979146 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b22484e-ecb8-48ad-846a-9bf09c732ad3" path="/var/lib/kubelet/pods/5b22484e-ecb8-48ad-846a-9bf09c732ad3/volumes" Nov 24 19:37:21 crc kubenswrapper[4812]: I1124 19:37:21.084296 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:21 crc kubenswrapper[4812]: I1124 19:37:21.186946 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrdjt"] Nov 24 19:37:21 crc kubenswrapper[4812]: I1124 19:37:21.344315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" event={"ID":"a238488e-998f-478f-8163-452bb47b4dfd","Type":"ContainerStarted","Data":"de2b2700493c1385c5ac4623f9add14cb2be7417b78e308ca7d8e082963d68ea"} Nov 24 19:37:21 crc kubenswrapper[4812]: I1124 19:37:21.462828 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:37:21 crc kubenswrapper[4812]: I1124 19:37:21.551146 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:22 crc kubenswrapper[4812]: I1124 19:37:22.357028 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerStarted","Data":"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15"} Nov 24 19:37:22 crc kubenswrapper[4812]: I1124 19:37:22.357297 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerStarted","Data":"6e76fe01dc3e634e60b69c37c3b36d66c915536afda1b586fae45af2cc9b9c4a"} Nov 24 19:37:22 crc kubenswrapper[4812]: I1124 19:37:22.807236 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 19:37:23 crc kubenswrapper[4812]: I1124 19:37:23.367359 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerStarted","Data":"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0"} Nov 24 19:37:24 crc kubenswrapper[4812]: I1124 19:37:24.382221 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerStarted","Data":"b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f"} Nov 24 
19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.393480 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerStarted","Data":"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753"} Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.393833 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.428941 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.274433943 podStartE2EDuration="5.428921256s" podCreationTimestamp="2025-11-24 19:37:20 +0000 UTC" firstStartedPulling="2025-11-24 19:37:21.563237524 +0000 UTC m=+1235.352189915" lastFinishedPulling="2025-11-24 19:37:24.717724867 +0000 UTC m=+1238.506677228" observedRunningTime="2025-11-24 19:37:25.418046714 +0000 UTC m=+1239.206999095" watchObservedRunningTime="2025-11-24 19:37:25.428921256 +0000 UTC m=+1239.217873637" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.599230 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.599281 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.643687 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.655798 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.864846 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.864900 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.902212 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:25 crc kubenswrapper[4812]: I1124 19:37:25.912797 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:26 crc kubenswrapper[4812]: I1124 19:37:26.403399 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 19:37:26 crc kubenswrapper[4812]: I1124 19:37:26.403661 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 19:37:26 crc kubenswrapper[4812]: I1124 19:37:26.403672 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:26 crc kubenswrapper[4812]: I1124 19:37:26.403681 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:28 crc kubenswrapper[4812]: I1124 19:37:28.630304 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 19:37:28 crc kubenswrapper[4812]: I1124 19:37:28.631035 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/glance-default-external-api-0" Nov 24 19:37:28 crc kubenswrapper[4812]: I1124 19:37:28.929216 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:28 crc kubenswrapper[4812]: I1124 19:37:28.929358 4812 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 19:37:28 crc kubenswrapper[4812]: I1124 19:37:28.936912 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.188425 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.189218 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-central-agent" containerID="cri-o://1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15" gracePeriod=30 Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.189377 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="proxy-httpd" containerID="cri-o://f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753" gracePeriod=30 Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.189431 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="sg-core" containerID="cri-o://b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f" gracePeriod=30 Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.189469 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-notification-agent" containerID="cri-o://08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0" gracePeriod=30 Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.461665 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" event={"ID":"a238488e-998f-478f-8163-452bb47b4dfd","Type":"ContainerStarted","Data":"56e92d62401e0fb5fecfcb29bd586b700c363e86804cbadf37038c174b1e81f1"} Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.465137 4812 generic.go:334] "Generic (PLEG): container finished" podID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerID="f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753" exitCode=0 Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.465161 4812 generic.go:334] "Generic (PLEG): container finished" podID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerID="b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f" exitCode=2 Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.465179 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerDied","Data":"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753"} Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.465200 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerDied","Data":"b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f"} Nov 24 
19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.482150 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" podStartSLOduration=2.07067301 podStartE2EDuration="12.482135411s" podCreationTimestamp="2025-11-24 19:37:20 +0000 UTC" firstStartedPulling="2025-11-24 19:37:21.199002761 +0000 UTC m=+1234.987955132" lastFinishedPulling="2025-11-24 19:37:31.610465162 +0000 UTC m=+1245.399417533" observedRunningTime="2025-11-24 19:37:32.477429616 +0000 UTC m=+1246.266382007" watchObservedRunningTime="2025-11-24 19:37:32.482135411 +0000 UTC m=+1246.271087782" Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.998864 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:37:32 crc kubenswrapper[4812]: I1124 19:37:32.998920 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.178096 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.269898 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-scripts\") pod \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.269950 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-combined-ca-bundle\") pod \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270013 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-log-httpd\") pod \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270076 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-sg-core-conf-yaml\") pod \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270201 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-config-data\") pod \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270235 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-run-httpd\") pod 
\"546ca8a3-fd87-4b43-9f22-418f3b5619d8\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270302 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4htjk\" (UniqueName: \"kubernetes.io/projected/546ca8a3-fd87-4b43-9f22-418f3b5619d8-kube-api-access-4htjk\") pod \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\" (UID: \"546ca8a3-fd87-4b43-9f22-418f3b5619d8\") " Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270453 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "546ca8a3-fd87-4b43-9f22-418f3b5619d8" (UID: "546ca8a3-fd87-4b43-9f22-418f3b5619d8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "546ca8a3-fd87-4b43-9f22-418f3b5619d8" (UID: "546ca8a3-fd87-4b43-9f22-418f3b5619d8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270954 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.270971 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/546ca8a3-fd87-4b43-9f22-418f3b5619d8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.275265 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-scripts" (OuterVolumeSpecName: "scripts") pod "546ca8a3-fd87-4b43-9f22-418f3b5619d8" (UID: "546ca8a3-fd87-4b43-9f22-418f3b5619d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.280440 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/546ca8a3-fd87-4b43-9f22-418f3b5619d8-kube-api-access-4htjk" (OuterVolumeSpecName: "kube-api-access-4htjk") pod "546ca8a3-fd87-4b43-9f22-418f3b5619d8" (UID: "546ca8a3-fd87-4b43-9f22-418f3b5619d8"). InnerVolumeSpecName "kube-api-access-4htjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.322459 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "546ca8a3-fd87-4b43-9f22-418f3b5619d8" (UID: "546ca8a3-fd87-4b43-9f22-418f3b5619d8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.347496 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "546ca8a3-fd87-4b43-9f22-418f3b5619d8" (UID: "546ca8a3-fd87-4b43-9f22-418f3b5619d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.374096 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4htjk\" (UniqueName: \"kubernetes.io/projected/546ca8a3-fd87-4b43-9f22-418f3b5619d8-kube-api-access-4htjk\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.374125 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.374135 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.374144 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.388722 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-config-data" (OuterVolumeSpecName: "config-data") pod "546ca8a3-fd87-4b43-9f22-418f3b5619d8" (UID: "546ca8a3-fd87-4b43-9f22-418f3b5619d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.475927 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/546ca8a3-fd87-4b43-9f22-418f3b5619d8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.480121 4812 generic.go:334] "Generic (PLEG): container finished" podID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerID="08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0" exitCode=0 Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.480155 4812 generic.go:334] "Generic (PLEG): container finished" podID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerID="1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15" exitCode=0 Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.480531 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerDied","Data":"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0"} Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.480573 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.480601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerDied","Data":"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15"} Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.480623 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"546ca8a3-fd87-4b43-9f22-418f3b5619d8","Type":"ContainerDied","Data":"6e76fe01dc3e634e60b69c37c3b36d66c915536afda1b586fae45af2cc9b9c4a"} Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.480650 4812 scope.go:117] "RemoveContainer" containerID="f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.500457 4812 scope.go:117] "RemoveContainer" containerID="b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.536216 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.539175 4812 scope.go:117] "RemoveContainer" containerID="08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.547780 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.563712 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.564053 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="proxy-httpd" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564068 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="proxy-httpd" Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.564087 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-notification-agent" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564093 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-notification-agent" Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.564102 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="sg-core" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564107 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="sg-core" Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.564127 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-central-agent" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564133 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-central-agent" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564286 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-central-agent" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564302 4812 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="sg-core" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564313 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="proxy-httpd" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.564322 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" containerName="ceilometer-notification-agent" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.565968 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.568701 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.569052 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.573364 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.578213 4812 scope.go:117] "RemoveContainer" containerID="1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.602460 4812 scope.go:117] "RemoveContainer" containerID="f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753" Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.606423 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753\": container with ID starting with f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753 not found: ID does not exist" containerID="f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.606458 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753"} err="failed to get container status \"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753\": rpc error: code = NotFound desc = could not find container \"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753\": container with ID starting with f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753 not found: ID does not exist" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.606479 4812 scope.go:117] "RemoveContainer" containerID="b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f" Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.607024 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f\": container with ID starting with b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f not found: ID does not exist" containerID="b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.607068 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f"} err="failed to get container status \"b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f\": rpc 
error: code = NotFound desc = could not find container \"b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f\": container with ID starting with b6745802a2b53e022f8145a0e1a4e371fd418b338e572c8ea6666f018a8ded4f not found: ID does not exist" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.607094 4812 scope.go:117] "RemoveContainer" containerID="08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0" Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.607690 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0\": container with ID starting with 08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0 not found: ID does not exist" containerID="08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.607733 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0"} err="failed to get container status \"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0\": rpc error: code = NotFound desc = could not find container \"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0\": container with ID starting with 08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0 not found: ID does not exist" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.607783 4812 scope.go:117] "RemoveContainer" containerID="1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15" Nov 24 19:37:33 crc kubenswrapper[4812]: E1124 19:37:33.608169 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15\": container with ID starting with 1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15 not found: ID does not exist" containerID="1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.608191 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15"} err="failed to get container status \"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15\": rpc error: code = NotFound desc = could not find container \"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15\": container with ID starting with 1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15 not found: ID does not exist" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.608205 4812 scope.go:117] "RemoveContainer" containerID="f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.608467 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753"} err="failed to get container status \"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753\": rpc error: code = NotFound desc = could not find container \"f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753\": container with ID starting with f23fe6e380be2dcc8e0f864b63ef1264108aff30be0349db6e3d42b71a9f9753 not found: ID does not exist" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.608775 4812 scope.go:117] "RemoveContainer" containerID="08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0"
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.609015 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0"} err="failed to get container status \"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0\": rpc error: code = NotFound desc = could not find container \"08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0\": container with ID starting with 08083ba830a4083a7a82c7cd6266359e761c3e7e4bfc62ce453a7aebf2a1fad0 not found: ID does not exist"
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.609044 4812 scope.go:117] "RemoveContainer" containerID="1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15"
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.609237 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15"} err="failed to get container status \"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15\": rpc error: code = NotFound desc = could not find container \"1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15\": container with ID starting with 1ff0234494f14c92b9ecfe982f60d54d55cbd1c3af84366672e26793de07fa15 not found: ID does not exist"
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.680591 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29djt\" (UniqueName: \"kubernetes.io/projected/4818f397-a711-498e-a599-9bb1dfc54e4b-kube-api-access-29djt\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0"
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.680662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0"
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.680739 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0"
Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.680789 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.680897 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.680929 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-config-data\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.681063 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-scripts\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782175 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782228 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-config-data\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782276 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-scripts\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782322 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29djt\" (UniqueName: \"kubernetes.io/projected/4818f397-a711-498e-a599-9bb1dfc54e4b-kube-api-access-29djt\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782388 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782415 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782429 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.782700 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-run-httpd\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.783153 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-log-httpd\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.787229 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.788290 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-config-data\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.792017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.797063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-scripts\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.807443 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29djt\" (UniqueName: \"kubernetes.io/projected/4818f397-a711-498e-a599-9bb1dfc54e4b-kube-api-access-29djt\") pod \"ceilometer-0\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " pod="openstack/ceilometer-0" Nov 24 19:37:33 crc kubenswrapper[4812]: I1124 19:37:33.884290 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 19:37:34 crc kubenswrapper[4812]: I1124 19:37:34.369458 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 19:37:34 crc kubenswrapper[4812]: W1124 19:37:34.375639 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4818f397_a711_498e_a599_9bb1dfc54e4b.slice/crio-5c9f27895e427c5b3d8da2ba981436c94dcd563006d04f4de34bbb9a2efc836a WatchSource:0}: Error finding container 5c9f27895e427c5b3d8da2ba981436c94dcd563006d04f4de34bbb9a2efc836a: Status 404 returned error can't find the container with id 5c9f27895e427c5b3d8da2ba981436c94dcd563006d04f4de34bbb9a2efc836a
Nov 24 19:37:34 crc kubenswrapper[4812]: I1124 19:37:34.488638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerStarted","Data":"5c9f27895e427c5b3d8da2ba981436c94dcd563006d04f4de34bbb9a2efc836a"}
Nov 24 19:37:34 crc kubenswrapper[4812]: I1124 19:37:34.979319 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="546ca8a3-fd87-4b43-9f22-418f3b5619d8" path="/var/lib/kubelet/pods/546ca8a3-fd87-4b43-9f22-418f3b5619d8/volumes"
Nov 24 19:37:35 crc kubenswrapper[4812]: I1124 19:37:35.501314 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerStarted","Data":"f6e86ccfe8280e8b4661715813b4fc5b4e15bf214e1cc651c8276828f74fc77f"}
Nov 24 19:37:36 crc kubenswrapper[4812]: I1124 19:37:36.056610 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 19:37:36 crc kubenswrapper[4812]: I1124 19:37:36.521866 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerStarted","Data":"f7f3c6e1226d11b051db70b3502c405223fad26eb15d48b902027cb34a3a6e82"}
Nov 24 19:37:37 crc kubenswrapper[4812]: I1124 19:37:37.531487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerStarted","Data":"52cfaebfa2f8da85cdd7ea7f23419eab31c4b68c157e07a83a51e7a2fbf063c3"}
Nov 24 19:37:39 crc kubenswrapper[4812]: I1124 19:37:39.558000 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerStarted","Data":"b61ba7ee65febd82c278b3a43c69845525df982992ca9a00349a4c05e2162859"}
Nov 24 19:37:39 crc kubenswrapper[4812]: I1124 19:37:39.558804 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 19:37:39 crc kubenswrapper[4812]: I1124 19:37:39.558197 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="proxy-httpd" containerID="cri-o://b61ba7ee65febd82c278b3a43c69845525df982992ca9a00349a4c05e2162859" gracePeriod=30
Nov 24 19:37:39 crc kubenswrapper[4812]: I1124 19:37:39.558136 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-central-agent" containerID="cri-o://f6e86ccfe8280e8b4661715813b4fc5b4e15bf214e1cc651c8276828f74fc77f" gracePeriod=30
Nov 24 19:37:39 crc kubenswrapper[4812]: I1124 19:37:39.558332 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="sg-core" containerID="cri-o://52cfaebfa2f8da85cdd7ea7f23419eab31c4b68c157e07a83a51e7a2fbf063c3" gracePeriod=30
Nov 24 19:37:39 crc kubenswrapper[4812]: I1124 19:37:39.558281 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-notification-agent" containerID="cri-o://f7f3c6e1226d11b051db70b3502c405223fad26eb15d48b902027cb34a3a6e82" gracePeriod=30
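The SyncLoop DELETE at 19:37:36 is the API-side deletion that produces the four "Killing container with a grace period ... gracePeriod=30" lines above. A hedged client-go sketch of what such a deletion request looks like from a client; the out-of-cluster kubeconfig loading is an assumption, while the namespace, pod name, and 30-second grace period come from the log:

```go
// Delete a pod with an explicit grace period; the kubelet then forwards
// that grace period to each container kill, as in the lines above.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Standard out-of-cluster config from ~/.kube/config (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	grace := int64(30) // matches gracePeriod=30 in the log
	err = cs.CoreV1().Pods("openstack").Delete(context.TODO(), "ceilometer-0",
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
	if err != nil {
		panic(err)
	}
}
```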
Nov 24 19:37:39 crc kubenswrapper[4812]: I1124 19:37:39.605999 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.424059728 podStartE2EDuration="6.605978985s" podCreationTimestamp="2025-11-24 19:37:33 +0000 UTC" firstStartedPulling="2025-11-24 19:37:34.377804934 +0000 UTC m=+1248.166757315" lastFinishedPulling="2025-11-24 19:37:38.559724201 +0000 UTC m=+1252.348676572" observedRunningTime="2025-11-24 19:37:39.599839978 +0000 UTC m=+1253.388792419" watchObservedRunningTime="2025-11-24 19:37:39.605978985 +0000 UTC m=+1253.394931356"
Nov 24 19:37:40 crc kubenswrapper[4812]: I1124 19:37:40.569686 4812 generic.go:334] "Generic (PLEG): container finished" podID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerID="b61ba7ee65febd82c278b3a43c69845525df982992ca9a00349a4c05e2162859" exitCode=0
Nov 24 19:37:40 crc kubenswrapper[4812]: I1124 19:37:40.570011 4812 generic.go:334] "Generic (PLEG): container finished" podID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerID="52cfaebfa2f8da85cdd7ea7f23419eab31c4b68c157e07a83a51e7a2fbf063c3" exitCode=2
Nov 24 19:37:40 crc kubenswrapper[4812]: I1124 19:37:40.570023 4812 generic.go:334] "Generic (PLEG): container finished" podID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerID="f7f3c6e1226d11b051db70b3502c405223fad26eb15d48b902027cb34a3a6e82" exitCode=0
Nov 24 19:37:40 crc kubenswrapper[4812]: I1124 19:37:40.569763 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerDied","Data":"b61ba7ee65febd82c278b3a43c69845525df982992ca9a00349a4c05e2162859"}
Nov 24 19:37:40 crc kubenswrapper[4812]: I1124 19:37:40.570064 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerDied","Data":"52cfaebfa2f8da85cdd7ea7f23419eab31c4b68c157e07a83a51e7a2fbf063c3"}
Nov 24 19:37:40 crc kubenswrapper[4812]: I1124 19:37:40.570080 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerDied","Data":"f7f3c6e1226d11b051db70b3502c405223fad26eb15d48b902027cb34a3a6e82"}
Nov 24 19:37:42 crc kubenswrapper[4812]: I1124 19:37:42.591184 4812 generic.go:334] "Generic (PLEG): container finished" podID="a238488e-998f-478f-8163-452bb47b4dfd" containerID="56e92d62401e0fb5fecfcb29bd586b700c363e86804cbadf37038c174b1e81f1" exitCode=0
Nov 24 19:37:42 crc kubenswrapper[4812]: I1124 19:37:42.591428 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" event={"ID":"a238488e-998f-478f-8163-452bb47b4dfd","Type":"ContainerDied","Data":"56e92d62401e0fb5fecfcb29bd586b700c363e86804cbadf37038c174b1e81f1"}
Nov 24 19:37:43 crc kubenswrapper[4812]: I1124 19:37:43.602806 4812 generic.go:334] "Generic (PLEG): container finished" podID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerID="f6e86ccfe8280e8b4661715813b4fc5b4e15bf214e1cc651c8276828f74fc77f" exitCode=0
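The pod_startup_latency_tracker entry above follows a simple relationship: podStartSLOduration equals podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling), taken on the monotonic m=+ offsets printed in the entry. A small sketch reproducing the arithmetic from the logged values, exact up to float64 rounding:

```go
// Reproduce the podStartSLOduration arithmetic from the tracker line:
// SLO duration = end-to-end start duration - image-pull window.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 1248.166757315 // m=+ offset, seconds
		lastFinishedPulling = 1252.348676572 // m=+ offset, seconds
		e2e                 = 6.605978985    // podStartE2EDuration, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull window:  %.9f s\n", pull)     // ~4.181919257
	fmt.Printf("SLO duration: %.9f s\n", e2e-pull) // ~2.424059728, as logged
}
```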
Nov 24 19:37:43 crc kubenswrapper[4812]: I1124 19:37:43.602871 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerDied","Data":"f6e86ccfe8280e8b4661715813b4fc5b4e15bf214e1cc651c8276828f74fc77f"}
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.030060 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.035216 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrdjt"
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.199950 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-combined-ca-bundle\") pod \"4818f397-a711-498e-a599-9bb1dfc54e4b\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") "
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200055 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-sg-core-conf-yaml\") pod \"4818f397-a711-498e-a599-9bb1dfc54e4b\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") "
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200103 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9v8w\" (UniqueName: \"kubernetes.io/projected/a238488e-998f-478f-8163-452bb47b4dfd-kube-api-access-g9v8w\") pod \"a238488e-998f-478f-8163-452bb47b4dfd\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") "
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200131 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-scripts\") pod \"4818f397-a711-498e-a599-9bb1dfc54e4b\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") "
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200194 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29djt\" (UniqueName: \"kubernetes.io/projected/4818f397-a711-498e-a599-9bb1dfc54e4b-kube-api-access-29djt\") pod \"4818f397-a711-498e-a599-9bb1dfc54e4b\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") "
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200249 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-combined-ca-bundle\") pod \"a238488e-998f-478f-8163-452bb47b4dfd\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") "
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200287 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-config-data\") pod \"a238488e-998f-478f-8163-452bb47b4dfd\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") "
Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName:
\"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-config-data\") pod \"4818f397-a711-498e-a599-9bb1dfc54e4b\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200450 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-scripts\") pod \"a238488e-998f-478f-8163-452bb47b4dfd\" (UID: \"a238488e-998f-478f-8163-452bb47b4dfd\") " Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200521 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-run-httpd\") pod \"4818f397-a711-498e-a599-9bb1dfc54e4b\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.200563 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-log-httpd\") pod \"4818f397-a711-498e-a599-9bb1dfc54e4b\" (UID: \"4818f397-a711-498e-a599-9bb1dfc54e4b\") " Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.204523 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4818f397-a711-498e-a599-9bb1dfc54e4b" (UID: "4818f397-a711-498e-a599-9bb1dfc54e4b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.204573 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4818f397-a711-498e-a599-9bb1dfc54e4b" (UID: "4818f397-a711-498e-a599-9bb1dfc54e4b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.208272 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-scripts" (OuterVolumeSpecName: "scripts") pod "a238488e-998f-478f-8163-452bb47b4dfd" (UID: "a238488e-998f-478f-8163-452bb47b4dfd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.210393 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a238488e-998f-478f-8163-452bb47b4dfd-kube-api-access-g9v8w" (OuterVolumeSpecName: "kube-api-access-g9v8w") pod "a238488e-998f-478f-8163-452bb47b4dfd" (UID: "a238488e-998f-478f-8163-452bb47b4dfd"). InnerVolumeSpecName "kube-api-access-g9v8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.212294 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-scripts" (OuterVolumeSpecName: "scripts") pod "4818f397-a711-498e-a599-9bb1dfc54e4b" (UID: "4818f397-a711-498e-a599-9bb1dfc54e4b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.220507 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4818f397-a711-498e-a599-9bb1dfc54e4b-kube-api-access-29djt" (OuterVolumeSpecName: "kube-api-access-29djt") pod "4818f397-a711-498e-a599-9bb1dfc54e4b" (UID: "4818f397-a711-498e-a599-9bb1dfc54e4b"). InnerVolumeSpecName "kube-api-access-29djt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.231063 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-config-data" (OuterVolumeSpecName: "config-data") pod "a238488e-998f-478f-8163-452bb47b4dfd" (UID: "a238488e-998f-478f-8163-452bb47b4dfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.234293 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a238488e-998f-478f-8163-452bb47b4dfd" (UID: "a238488e-998f-478f-8163-452bb47b4dfd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.250854 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4818f397-a711-498e-a599-9bb1dfc54e4b" (UID: "4818f397-a711-498e-a599-9bb1dfc54e4b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.304803 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305035 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305168 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4818f397-a711-498e-a599-9bb1dfc54e4b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305252 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305359 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9v8w\" (UniqueName: \"kubernetes.io/projected/a238488e-998f-478f-8163-452bb47b4dfd-kube-api-access-g9v8w\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305450 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305522 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29djt\" (UniqueName: \"kubernetes.io/projected/4818f397-a711-498e-a599-9bb1dfc54e4b-kube-api-access-29djt\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305591 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.305667 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a238488e-998f-478f-8163-452bb47b4dfd-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.317137 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4818f397-a711-498e-a599-9bb1dfc54e4b" (UID: "4818f397-a711-498e-a599-9bb1dfc54e4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.341859 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-config-data" (OuterVolumeSpecName: "config-data") pod "4818f397-a711-498e-a599-9bb1dfc54e4b" (UID: "4818f397-a711-498e-a599-9bb1dfc54e4b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.407025 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.407067 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4818f397-a711-498e-a599-9bb1dfc54e4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.615864 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" event={"ID":"a238488e-998f-478f-8163-452bb47b4dfd","Type":"ContainerDied","Data":"de2b2700493c1385c5ac4623f9add14cb2be7417b78e308ca7d8e082963d68ea"} Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.615888 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zrdjt" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.615907 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de2b2700493c1385c5ac4623f9add14cb2be7417b78e308ca7d8e082963d68ea" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.619693 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4818f397-a711-498e-a599-9bb1dfc54e4b","Type":"ContainerDied","Data":"5c9f27895e427c5b3d8da2ba981436c94dcd563006d04f4de34bbb9a2efc836a"} Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.619741 4812 scope.go:117] "RemoveContainer" containerID="b61ba7ee65febd82c278b3a43c69845525df982992ca9a00349a4c05e2162859" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.620094 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.710555 4812 scope.go:117] "RemoveContainer" containerID="52cfaebfa2f8da85cdd7ea7f23419eab31c4b68c157e07a83a51e7a2fbf063c3" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.732838 4812 scope.go:117] "RemoveContainer" containerID="f7f3c6e1226d11b051db70b3502c405223fad26eb15d48b902027cb34a3a6e82" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.747886 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.767759 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.768066 4812 scope.go:117] "RemoveContainer" containerID="f6e86ccfe8280e8b4661715813b4fc5b4e15bf214e1cc651c8276828f74fc77f" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779141 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 19:37:44 crc kubenswrapper[4812]: E1124 19:37:44.779582 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-notification-agent" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779606 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-notification-agent" Nov 24 19:37:44 crc kubenswrapper[4812]: E1124 19:37:44.779623 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a238488e-998f-478f-8163-452bb47b4dfd" containerName="nova-cell0-conductor-db-sync" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779630 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a238488e-998f-478f-8163-452bb47b4dfd" containerName="nova-cell0-conductor-db-sync" Nov 24 19:37:44 crc kubenswrapper[4812]: E1124 19:37:44.779653 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="proxy-httpd" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779658 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="proxy-httpd" Nov 24 19:37:44 crc kubenswrapper[4812]: E1124 19:37:44.779668 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-central-agent" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779675 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-central-agent" Nov 24 19:37:44 crc kubenswrapper[4812]: E1124 19:37:44.779688 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="sg-core" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779694 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="sg-core" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779852 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-central-agent" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779868 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="proxy-httpd" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779880 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a238488e-998f-478f-8163-452bb47b4dfd" containerName="nova-cell0-conductor-db-sync" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779893 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="ceilometer-notification-agent" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.779905 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" containerName="sg-core" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.780478 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.783098 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-2rhhb" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.795967 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.800804 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.823920 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.828946 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.832946 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.833163 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.851389 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915742 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znch9\" (UniqueName: \"kubernetes.io/projected/ceb2200c-b4b1-4664-8daf-8cd31df808d7-kube-api-access-znch9\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915787 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915834 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwmd\" (UniqueName: \"kubernetes.io/projected/93240875-2cab-44e0-b475-20f46cb4850e-kube-api-access-xvwmd\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " 
pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915894 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915919 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915947 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.915976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.916004 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-scripts\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.916036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-config-data\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:44 crc kubenswrapper[4812]: I1124 19:37:44.983279 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4818f397-a711-498e-a599-9bb1dfc54e4b" path="/var/lib/kubelet/pods/4818f397-a711-498e-a599-9bb1dfc54e4b/volumes" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.017995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwmd\" (UniqueName: \"kubernetes.io/projected/93240875-2cab-44e0-b475-20f46cb4850e-kube-api-access-xvwmd\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018093 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018123 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018155 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018183 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-scripts\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018214 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-config-data\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018243 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znch9\" (UniqueName: \"kubernetes.io/projected/ceb2200c-b4b1-4664-8daf-8cd31df808d7-kube-api-access-znch9\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018259 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018284 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.018682 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-log-httpd\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.019683 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-run-httpd\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.025201 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.025578 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-config-data\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.026321 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.027280 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-scripts\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.027905 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.028769 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.052921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwmd\" (UniqueName: \"kubernetes.io/projected/93240875-2cab-44e0-b475-20f46cb4850e-kube-api-access-xvwmd\") pod \"nova-cell0-conductor-0\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.054614 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znch9\" (UniqueName: \"kubernetes.io/projected/ceb2200c-b4b1-4664-8daf-8cd31df808d7-kube-api-access-znch9\") pod \"ceilometer-0\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " pod="openstack/ceilometer-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.122301 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.153186 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.654501 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Nov 24 19:37:45 crc kubenswrapper[4812]: I1124 19:37:45.710665 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 19:37:45 crc kubenswrapper[4812]: W1124 19:37:45.715322 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceb2200c_b4b1_4664_8daf_8cd31df808d7.slice/crio-1ffb00e3e124b72ca81b93a4506bbc7dbdd4b9414dbf4e8d866a2cfe71f928bc WatchSource:0}: Error finding container 1ffb00e3e124b72ca81b93a4506bbc7dbdd4b9414dbf4e8d866a2cfe71f928bc: Status 404 returned error can't find the container with id 1ffb00e3e124b72ca81b93a4506bbc7dbdd4b9414dbf4e8d866a2cfe71f928bc
Nov 24 19:37:46 crc kubenswrapper[4812]: I1124 19:37:46.641762 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"93240875-2cab-44e0-b475-20f46cb4850e","Type":"ContainerStarted","Data":"a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af"}
Nov 24 19:37:46 crc kubenswrapper[4812]: I1124 19:37:46.642063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"93240875-2cab-44e0-b475-20f46cb4850e","Type":"ContainerStarted","Data":"a91aa1ed2e3a138c6cd18478434b1ec1518f2e76b65517d1a8b48e280f559523"}
Nov 24 19:37:46 crc kubenswrapper[4812]: I1124 19:37:46.642551 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Nov 24 19:37:46 crc kubenswrapper[4812]: I1124 19:37:46.645304 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerStarted","Data":"0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299"}
Nov 24 19:37:46 crc kubenswrapper[4812]: I1124 19:37:46.645343 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerStarted","Data":"1ffb00e3e124b72ca81b93a4506bbc7dbdd4b9414dbf4e8d866a2cfe71f928bc"}
Nov 24 19:37:46 crc kubenswrapper[4812]: I1124 19:37:46.997556 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.997534808 podStartE2EDuration="2.997534808s" podCreationTimestamp="2025-11-24 19:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:46.662810333 +0000 UTC m=+1260.451762744" watchObservedRunningTime="2025-11-24 19:37:46.997534808 +0000 UTC m=+1260.786487189"
Nov 24 19:37:47 crc kubenswrapper[4812]: I1124 19:37:47.657060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerStarted","Data":"9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b"}
Nov 24 19:37:47 crc kubenswrapper[4812]: I1124 19:37:47.657312 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerStarted","Data":"b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb"}
Nov 24 19:37:49 crc kubenswrapper[4812]: I1124 19:37:49.687329 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerStarted","Data":"637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8"}
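The ContainerStarted PLEG events above surface through the API as pod status updates, and the readiness transition below is what a watcher would see flip the Ready condition. A client-go watch sketch that waits for openstack/ceilometer-0 to report Ready; kubeconfig handling is assumed as in the earlier sketch:

```go
// Watch openstack/ceilometer-0 and report when its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	w, err := cs.CoreV1().Pods("openstack").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ceilometer-0",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is ready:", pod.Name)
				return
			}
		}
	}
}
```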
event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerStarted","Data":"637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8"} Nov 24 19:37:49 crc kubenswrapper[4812]: I1124 19:37:49.689215 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 19:37:49 crc kubenswrapper[4812]: I1124 19:37:49.716089 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9625763750000003 podStartE2EDuration="5.716073499s" podCreationTimestamp="2025-11-24 19:37:44 +0000 UTC" firstStartedPulling="2025-11-24 19:37:45.717805198 +0000 UTC m=+1259.506757569" lastFinishedPulling="2025-11-24 19:37:48.471302312 +0000 UTC m=+1262.260254693" observedRunningTime="2025-11-24 19:37:49.714069041 +0000 UTC m=+1263.503021442" watchObservedRunningTime="2025-11-24 19:37:49.716073499 +0000 UTC m=+1263.505025880" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.168438 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.706119 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-2zpr9"] Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.707960 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.724049 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zpr9"] Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.724819 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.725184 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.844624 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-scripts\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.844698 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-config-data\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.844768 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w6gb\" (UniqueName: \"kubernetes.io/projected/cd9d1fd1-4439-473b-a619-dd107ae950ff-kube-api-access-8w6gb\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.844811 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: 
\"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.946165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-config-data\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.946466 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8w6gb\" (UniqueName: \"kubernetes.io/projected/cd9d1fd1-4439-473b-a619-dd107ae950ff-kube-api-access-8w6gb\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.946515 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.946576 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-scripts\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.962154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.979807 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-config-data\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.980061 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-scripts\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:50 crc kubenswrapper[4812]: I1124 19:37:50.985625 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8w6gb\" (UniqueName: \"kubernetes.io/projected/cd9d1fd1-4439-473b-a619-dd107ae950ff-kube-api-access-8w6gb\") pod \"nova-cell0-cell-mapping-2zpr9\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.027173 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.028981 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.034890 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.036383 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.078761 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zpr9"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.097395 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.098937 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.105655 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.149386 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.150427 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353fa880-de21-4dcc-b244-7a98c079de58-logs\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.150470 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-config-data\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.150522 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/353fa880-de21-4dcc-b244-7a98c079de58-kube-api-access-mj5z2\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.150550 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbg7x\" (UniqueName: \"kubernetes.io/projected/0b1eff93-3e0b-440b-8baf-56e84e60ded1-kube-api-access-xbg7x\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.150566 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-config-data\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.150590 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.150639 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1eff93-3e0b-440b-8baf-56e84e60ded1-logs\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.218774 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-s9tzq"] Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.231910 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-s9tzq"] Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.232013 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbg7x\" (UniqueName: \"kubernetes.io/projected/0b1eff93-3e0b-440b-8baf-56e84e60ded1-kube-api-access-xbg7x\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256345 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-config-data\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256383 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256412 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256438 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1eff93-3e0b-440b-8baf-56e84e60ded1-logs\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256485 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353fa880-de21-4dcc-b244-7a98c079de58-logs\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256518 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-config-data\") pod \"nova-metadata-0\" (UID: 
\"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.256570 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/353fa880-de21-4dcc-b244-7a98c079de58-kube-api-access-mj5z2\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.260049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1eff93-3e0b-440b-8baf-56e84e60ded1-logs\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.261554 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353fa880-de21-4dcc-b244-7a98c079de58-logs\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.274184 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-config-data\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.294243 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/353fa880-de21-4dcc-b244-7a98c079de58-kube-api-access-mj5z2\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.303280 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.310955 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") " pod="openstack/nova-metadata-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.316992 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbg7x\" (UniqueName: \"kubernetes.io/projected/0b1eff93-3e0b-440b-8baf-56e84e60ded1-kube-api-access-xbg7x\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.324106 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-config-data\") pod \"nova-api-0\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " pod="openstack/nova-api-0" Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.337401 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.338676 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.345426 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.360226 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.360585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.360646 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-config\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.360686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.360727 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.360799 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snrck\" (UniqueName: \"kubernetes.io/projected/90a24987-2482-402a-bd74-c4edd9e8c7a5-kube-api-access-snrck\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.360836 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.381585 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.435848 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.437563 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.445321 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.451316 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.473866 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-config-data\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.474235 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.474377 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-config\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.474477 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.474638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.474756 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnjgh\" (UniqueName: \"kubernetes.io/projected/9585686c-a5f0-4e0e-a761-c0319546fb74-kube-api-access-lnjgh\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.474897 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snrck\" (UniqueName: \"kubernetes.io/projected/90a24987-2482-402a-bd74-c4edd9e8c7a5-kube-api-access-snrck\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.475001 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.475135 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.475943 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-svc\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.476052 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.476517 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-swift-storage-0\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.476734 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-config\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.477246 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.507661 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snrck\" (UniqueName: \"kubernetes.io/projected/90a24987-2482-402a-bd74-c4edd9e8c7a5-kube-api-access-snrck\") pod \"dnsmasq-dns-5dd7c4987f-s9tzq\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.532054 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.576488 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-config-data\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.576939 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.577075 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnjgh\" (UniqueName: \"kubernetes.io/projected/9585686c-a5f0-4e0e-a761-c0319546fb74-kube-api-access-lnjgh\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.577301 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6rlp\" (UniqueName: \"kubernetes.io/projected/0feedb89-3882-482b-82d0-24de3d81f892-kube-api-access-b6rlp\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.577430 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.577492 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.580611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-config-data\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.580999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.601877 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnjgh\" (UniqueName: \"kubernetes.io/projected/9585686c-a5f0-4e0e-a761-c0319546fb74-kube-api-access-lnjgh\") pod \"nova-scheduler-0\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.648725 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.667805 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.679499 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.679644 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6rlp\" (UniqueName: \"kubernetes.io/projected/0feedb89-3882-482b-82d0-24de3d81f892-kube-api-access-b6rlp\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.679716 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.688658 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.693658 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.697183 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6rlp\" (UniqueName: \"kubernetes.io/projected/0feedb89-3882-482b-82d0-24de3d81f892-kube-api-access-b6rlp\") pod \"nova-cell1-novncproxy-0\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.710310 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zpr9"]
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.756203 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 24 19:37:51 crc kubenswrapper[4812]: W1124 19:37:51.766986 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd9d1fd1_4439_473b_a619_dd107ae950ff.slice/crio-cb19bd053f111c86bf88fa3eeb1db0ac20f3fef677264f1df4e0522a1f14d659 WatchSource:0}: Error finding container cb19bd053f111c86bf88fa3eeb1db0ac20f3fef677264f1df4e0522a1f14d659: Status 404 returned error can't find the container with id cb19bd053f111c86bf88fa3eeb1db0ac20f3fef677264f1df4e0522a1f14d659
Nov 24 19:37:51 crc kubenswrapper[4812]: I1124 19:37:51.924567 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.268103 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jv5mj"]
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.269551 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.278850 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.278906 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Nov 24 19:37:52 crc kubenswrapper[4812]: W1124 19:37:52.296943 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353fa880_de21_4dcc_b244_7a98c079de58.slice/crio-bdde4a09b4397b483248424cbb381c1f4dcfc5b21b1586170ca007ac814dce89 WatchSource:0}: Error finding container bdde4a09b4397b483248424cbb381c1f4dcfc5b21b1586170ca007ac814dce89: Status 404 returned error can't find the container with id bdde4a09b4397b483248424cbb381c1f4dcfc5b21b1586170ca007ac814dce89
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.314386 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.330148 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jv5mj"]
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.341404 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.411241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.411382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-config-data\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
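The two "Failed to process watch event ... Status 404" warnings above are a transient race: the cgroup watcher sees a new crio-<id> scope before the runtime can answer queries for that ID. Both IDs (cb19bd05... for nova-cell0-cell-mapping-2zpr9, bdde4a09... for nova-metadata-0) belong to sandboxes that report ContainerStarted moments later, so no action is needed. A hedged sketch of the usual mitigation for tooling that hits the same race (illustrative; `lookup` is a hypothetical callable, not a cAdvisor or CRI API):

```python
import time

def resolve_with_retry(lookup, container_id: str, attempts: int = 5, base_delay: float = 0.1):
    """Retry a container-status lookup that can transiently fail right after
    the container's cgroup appears; `lookup` raises KeyError while the
    runtime does not yet know the ID (stand-in for a NotFound/404 error)."""
    for attempt in range(attempts):
        try:
            return lookup(container_id)
        except KeyError:
            time.sleep(base_delay * (2 ** attempt))  # simple exponential backoff
    return None  # still unknown; treat as "container gone"
```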
\"kubernetes.io/projected/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-kube-api-access-db5wg\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.411546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-scripts\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.430962 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-s9tzq"] Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.501885 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:37:52 crc kubenswrapper[4812]: W1124 19:37:52.507532 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0feedb89_3882_482b_82d0_24de3d81f892.slice/crio-b3f8f78ad659646a1ac1d514ff1e2321791c7fe8f94b08f7a24a4f5862a350a0 WatchSource:0}: Error finding container b3f8f78ad659646a1ac1d514ff1e2321791c7fe8f94b08f7a24a4f5862a350a0: Status 404 returned error can't find the container with id b3f8f78ad659646a1ac1d514ff1e2321791c7fe8f94b08f7a24a4f5862a350a0 Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.512931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-config-data\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.513038 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db5wg\" (UniqueName: \"kubernetes.io/projected/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-kube-api-access-db5wg\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.513070 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-scripts\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.513109 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.517747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.518425 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-config-data\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.527890 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-scripts\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.532567 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db5wg\" (UniqueName: \"kubernetes.io/projected/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-kube-api-access-db5wg\") pod \"nova-cell1-conductor-db-sync-jv5mj\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.693878 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.733583 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9585686c-a5f0-4e0e-a761-c0319546fb74","Type":"ContainerStarted","Data":"85cea4fb0718f8077475110dcbceeb81289aa683001d23602934c7f2354bad6a"}
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.734947 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zpr9" event={"ID":"cd9d1fd1-4439-473b-a619-dd107ae950ff","Type":"ContainerStarted","Data":"22145cb6161b542de832198498201f037566ff179f93263c471dd93f46869d47"}
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.734969 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zpr9" event={"ID":"cd9d1fd1-4439-473b-a619-dd107ae950ff","Type":"ContainerStarted","Data":"cb19bd053f111c86bf88fa3eeb1db0ac20f3fef677264f1df4e0522a1f14d659"}
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.735927 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b1eff93-3e0b-440b-8baf-56e84e60ded1","Type":"ContainerStarted","Data":"913529e5f40fcc24c829b7543360eb46509826b376d7c00887889336d0a05cd6"}
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.738660 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"353fa880-de21-4dcc-b244-7a98c079de58","Type":"ContainerStarted","Data":"bdde4a09b4397b483248424cbb381c1f4dcfc5b21b1586170ca007ac814dce89"}
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.741923 4812 generic.go:334] "Generic (PLEG): container finished" podID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerID="d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26" exitCode=0
Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.741979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" event={"ID":"90a24987-2482-402a-bd74-c4edd9e8c7a5","Type":"ContainerDied","Data":"d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26"}
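The "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" entries come from the kubelet's pod lifecycle event generator, which periodically relists container states and turns each observed transition into a ContainerStarted/ContainerDied event for the sync loop. A minimal sketch of that diffing idea (illustrative, not kubelet source):

```python
def pleg_events(old: dict[str, str], new: dict[str, str]) -> list[tuple[str, str]]:
    """Compare two relist snapshots {container_id: state} and emit one event
    per observed transition, mirroring the PLEG entries above."""
    events = []
    for cid, state in new.items():
        if old.get(cid) != state:
            kind = "ContainerStarted" if state == "running" else "ContainerDied"
            events.append((kind, cid))
    return events

# e.g. the dnsmasq init container d1ccc8c5... moving running -> exited yields
# ("ContainerDied", "d1ccc8c5..."), matching the ContainerDied entry above.
```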
pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" event={"ID":"90a24987-2482-402a-bd74-c4edd9e8c7a5","Type":"ContainerStarted","Data":"258b547381b9743c23e116f2e76ed22188a22bfbf426a46945e957510dc3bcc9"} Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.749136 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0feedb89-3882-482b-82d0-24de3d81f892","Type":"ContainerStarted","Data":"b3f8f78ad659646a1ac1d514ff1e2321791c7fe8f94b08f7a24a4f5862a350a0"} Nov 24 19:37:52 crc kubenswrapper[4812]: I1124 19:37:52.753031 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-2zpr9" podStartSLOduration=2.753020686 podStartE2EDuration="2.753020686s" podCreationTimestamp="2025-11-24 19:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:52.747676922 +0000 UTC m=+1266.536629293" watchObservedRunningTime="2025-11-24 19:37:52.753020686 +0000 UTC m=+1266.541973057" Nov 24 19:37:53 crc kubenswrapper[4812]: I1124 19:37:53.310079 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jv5mj"] Nov 24 19:37:53 crc kubenswrapper[4812]: I1124 19:37:53.761955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jv5mj" event={"ID":"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844","Type":"ContainerStarted","Data":"705fac9235fc051bf3375b13e8f6298b3a7ac75c7c6fb5d5d491e30790d9dd51"} Nov 24 19:37:53 crc kubenswrapper[4812]: I1124 19:37:53.762266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jv5mj" event={"ID":"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844","Type":"ContainerStarted","Data":"c3cadf35780ad0ab3de592649b463e3a4c4e606ec39f0caf92afb53c2503e3c3"} Nov 24 19:37:53 crc kubenswrapper[4812]: I1124 19:37:53.764888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" event={"ID":"90a24987-2482-402a-bd74-c4edd9e8c7a5","Type":"ContainerStarted","Data":"f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534"} Nov 24 19:37:53 crc kubenswrapper[4812]: I1124 19:37:53.765116 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" Nov 24 19:37:53 crc kubenswrapper[4812]: I1124 19:37:53.779303 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jv5mj" podStartSLOduration=1.779284676 podStartE2EDuration="1.779284676s" podCreationTimestamp="2025-11-24 19:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:53.779176812 +0000 UTC m=+1267.568129183" watchObservedRunningTime="2025-11-24 19:37:53.779284676 +0000 UTC m=+1267.568237047" Nov 24 19:37:53 crc kubenswrapper[4812]: I1124 19:37:53.809228 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" podStartSLOduration=2.809208805 podStartE2EDuration="2.809208805s" podCreationTimestamp="2025-11-24 19:37:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:53.803938714 +0000 UTC m=+1267.592891085" watchObservedRunningTime="2025-11-24 19:37:53.809208805 +0000 UTC m=+1267.598161166" Nov 24 19:37:54 crc 
Nov 24 19:37:54 crc kubenswrapper[4812]: I1124 19:37:54.668210 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 24 19:37:54 crc kubenswrapper[4812]: I1124 19:37:54.678795 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.794771 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b1eff93-3e0b-440b-8baf-56e84e60ded1","Type":"ContainerStarted","Data":"52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e"}
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.795598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b1eff93-3e0b-440b-8baf-56e84e60ded1","Type":"ContainerStarted","Data":"45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc"}
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.798468 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"353fa880-de21-4dcc-b244-7a98c079de58","Type":"ContainerStarted","Data":"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124"}
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.798518 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"353fa880-de21-4dcc-b244-7a98c079de58","Type":"ContainerStarted","Data":"0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd"}
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.798650 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-log" containerID="cri-o://0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd" gracePeriod=30
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.798773 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-metadata" containerID="cri-o://1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124" gracePeriod=30
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.803287 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0feedb89-3882-482b-82d0-24de3d81f892","Type":"ContainerStarted","Data":"7722d3ce2d1af69114e8e9df886108d0349e108a42086aac479457603d1c072a"}
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.803465 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0feedb89-3882-482b-82d0-24de3d81f892" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7722d3ce2d1af69114e8e9df886108d0349e108a42086aac479457603d1c072a" gracePeriod=30
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.811139 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9585686c-a5f0-4e0e-a761-c0319546fb74","Type":"ContainerStarted","Data":"ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196"}
Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.820491 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.103850003 podStartE2EDuration="6.820470734s" podCreationTimestamp="2025-11-24 19:37:50 +0000 UTC" firstStartedPulling="2025-11-24 19:37:51.968206491 +0000 UTC m=+1265.757158862" lastFinishedPulling="2025-11-24 19:37:55.684827202 +0000 UTC m=+1269.473779593" observedRunningTime="2025-11-24 19:37:56.811845776 +0000 UTC m=+1270.600798157" watchObservedRunningTime="2025-11-24 19:37:56.820470734 +0000 UTC m=+1270.609423125"
lastFinishedPulling="2025-11-24 19:37:55.684827202 +0000 UTC m=+1269.473779593" observedRunningTime="2025-11-24 19:37:56.811845776 +0000 UTC m=+1270.600798157" watchObservedRunningTime="2025-11-24 19:37:56.820470734 +0000 UTC m=+1270.609423125" Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.836371 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.449048248 podStartE2EDuration="5.836324689s" podCreationTimestamp="2025-11-24 19:37:51 +0000 UTC" firstStartedPulling="2025-11-24 19:37:52.298502339 +0000 UTC m=+1266.087454710" lastFinishedPulling="2025-11-24 19:37:55.68577878 +0000 UTC m=+1269.474731151" observedRunningTime="2025-11-24 19:37:56.833656773 +0000 UTC m=+1270.622609154" watchObservedRunningTime="2025-11-24 19:37:56.836324689 +0000 UTC m=+1270.625277070" Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.858137 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.678411868 podStartE2EDuration="5.858123176s" podCreationTimestamp="2025-11-24 19:37:51 +0000 UTC" firstStartedPulling="2025-11-24 19:37:52.510752606 +0000 UTC m=+1266.299704977" lastFinishedPulling="2025-11-24 19:37:55.690463914 +0000 UTC m=+1269.479416285" observedRunningTime="2025-11-24 19:37:56.852385461 +0000 UTC m=+1270.641337882" watchObservedRunningTime="2025-11-24 19:37:56.858123176 +0000 UTC m=+1270.647075537" Nov 24 19:37:56 crc kubenswrapper[4812]: I1124 19:37:56.873013 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.46996398 podStartE2EDuration="5.872995233s" podCreationTimestamp="2025-11-24 19:37:51 +0000 UTC" firstStartedPulling="2025-11-24 19:37:52.281434329 +0000 UTC m=+1266.070386700" lastFinishedPulling="2025-11-24 19:37:55.684465582 +0000 UTC m=+1269.473417953" observedRunningTime="2025-11-24 19:37:56.867214837 +0000 UTC m=+1270.656167218" watchObservedRunningTime="2025-11-24 19:37:56.872995233 +0000 UTC m=+1270.661947604" Nov 24 19:37:57 crc kubenswrapper[4812]: E1124 19:37:57.089386 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353fa880_de21_4dcc_b244_7a98c079de58.slice/crio-conmon-1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353fa880_de21_4dcc_b244_7a98c079de58.slice/crio-1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124.scope\": RecentStats: unable to find data in memory cache]" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.463637 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.524021 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-config-data\") pod \"353fa880-de21-4dcc-b244-7a98c079de58\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") "
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.524088 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/353fa880-de21-4dcc-b244-7a98c079de58-kube-api-access-mj5z2\") pod \"353fa880-de21-4dcc-b244-7a98c079de58\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") "
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.524120 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-combined-ca-bundle\") pod \"353fa880-de21-4dcc-b244-7a98c079de58\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") "
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.524272 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353fa880-de21-4dcc-b244-7a98c079de58-logs\") pod \"353fa880-de21-4dcc-b244-7a98c079de58\" (UID: \"353fa880-de21-4dcc-b244-7a98c079de58\") "
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.525104 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/353fa880-de21-4dcc-b244-7a98c079de58-logs" (OuterVolumeSpecName: "logs") pod "353fa880-de21-4dcc-b244-7a98c079de58" (UID: "353fa880-de21-4dcc-b244-7a98c079de58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.532849 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/353fa880-de21-4dcc-b244-7a98c079de58-kube-api-access-mj5z2" (OuterVolumeSpecName: "kube-api-access-mj5z2") pod "353fa880-de21-4dcc-b244-7a98c079de58" (UID: "353fa880-de21-4dcc-b244-7a98c079de58"). InnerVolumeSpecName "kube-api-access-mj5z2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.555927 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-config-data" (OuterVolumeSpecName: "config-data") pod "353fa880-de21-4dcc-b244-7a98c079de58" (UID: "353fa880-de21-4dcc-b244-7a98c079de58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.580151 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "353fa880-de21-4dcc-b244-7a98c079de58" (UID: "353fa880-de21-4dcc-b244-7a98c079de58"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.627502 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj5z2\" (UniqueName: \"kubernetes.io/projected/353fa880-de21-4dcc-b244-7a98c079de58-kube-api-access-mj5z2\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.627561 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.627580 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/353fa880-de21-4dcc-b244-7a98c079de58-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.627597 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/353fa880-de21-4dcc-b244-7a98c079de58-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.825772 4812 generic.go:334] "Generic (PLEG): container finished" podID="353fa880-de21-4dcc-b244-7a98c079de58" containerID="1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124" exitCode=0 Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.825818 4812 generic.go:334] "Generic (PLEG): container finished" podID="353fa880-de21-4dcc-b244-7a98c079de58" containerID="0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd" exitCode=143 Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.825844 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.825863 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"353fa880-de21-4dcc-b244-7a98c079de58","Type":"ContainerDied","Data":"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124"} Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.825937 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"353fa880-de21-4dcc-b244-7a98c079de58","Type":"ContainerDied","Data":"0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd"} Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.825960 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"353fa880-de21-4dcc-b244-7a98c079de58","Type":"ContainerDied","Data":"bdde4a09b4397b483248424cbb381c1f4dcfc5b21b1586170ca007ac814dce89"} Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.825986 4812 scope.go:117] "RemoveContainer" containerID="1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.877949 4812 scope.go:117] "RemoveContainer" containerID="0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd" Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.883487 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.907562 4812 scope.go:117] "RemoveContainer" containerID="1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124" Nov 24 19:37:57 crc kubenswrapper[4812]: E1124 19:37:57.908152 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
Nov 24 19:37:57 crc kubenswrapper[4812]: E1124 19:37:57.908152 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124\": container with ID starting with 1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124 not found: ID does not exist" containerID="1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.908207 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124"} err="failed to get container status \"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124\": rpc error: code = NotFound desc = could not find container \"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124\": container with ID starting with 1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124 not found: ID does not exist"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.908231 4812 scope.go:117] "RemoveContainer" containerID="0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd"
Nov 24 19:37:57 crc kubenswrapper[4812]: E1124 19:37:57.908773 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd\": container with ID starting with 0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd not found: ID does not exist" containerID="0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.908796 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd"} err="failed to get container status \"0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd\": rpc error: code = NotFound desc = could not find container \"0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd\": container with ID starting with 0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd not found: ID does not exist"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.908811 4812 scope.go:117] "RemoveContainer" containerID="1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.909283 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124"} err="failed to get container status \"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124\": rpc error: code = NotFound desc = could not find container \"1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124\": container with ID starting with 1f1793fbdb894127bcaef71570507d95429b65cdfbe2b21db76b1d6d0e9f7124 not found: ID does not exist"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.909369 4812 scope.go:117] "RemoveContainer" containerID="0a3f68f8c6ede033ee458e3eda0910f1bb207efaffbe6d58fd872df14b6671bd"
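The repeated NotFound errors above are benign: both containers were already removed along with their sandbox, and the kubelet simply retries RemoveContainer for each recorded ID, treating "ID does not exist" as already done. The pattern is ordinary idempotent cleanup; a hedged sketch of the idea (illustrative, not kubelet source):

```python
class NotFoundError(Exception):
    """Stand-in for the runtime's NotFound RPC error."""

def remove_container(runtime, container_id: str) -> None:
    # Deleting an already-deleted container is a no-op, so repeated cleanup
    # passes over the same ID (as in the entries above) stay harmless.
    try:
        runtime.remove(container_id)
    except NotFoundError:
        pass  # already gone; nothing left to do
```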
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.919185 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.938953 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:57 crc kubenswrapper[4812]: E1124 19:37:57.939619 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-metadata"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.939642 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-metadata"
Nov 24 19:37:57 crc kubenswrapper[4812]: E1124 19:37:57.939664 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-log"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.939673 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-log"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.939925 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-metadata"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.939955 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="353fa880-de21-4dcc-b244-7a98c079de58" containerName="nova-metadata-log"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.941244 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.943892 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.944089 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 24 19:37:57 crc kubenswrapper[4812]: I1124 19:37:57.965361 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.035277 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.035385 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrs8\" (UniqueName: \"kubernetes.io/projected/22b0b100-431d-47ab-9085-8634d013e87f-kube-api-access-tnrs8\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.035601 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-config-data\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.035747 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b0b100-431d-47ab-9085-8634d013e87f-logs\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.035890 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.137597 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.137670 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnrs8\" (UniqueName: \"kubernetes.io/projected/22b0b100-431d-47ab-9085-8634d013e87f-kube-api-access-tnrs8\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.137697 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-config-data\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.137717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b0b100-431d-47ab-9085-8634d013e87f-logs\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.137740 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.138144 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b0b100-431d-47ab-9085-8634d013e87f-logs\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.143960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.144019 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-config-data\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.154028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.157131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnrs8\" (UniqueName: \"kubernetes.io/projected/22b0b100-431d-47ab-9085-8634d013e87f-kube-api-access-tnrs8\") pod \"nova-metadata-0\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.268299 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.727394 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 24 19:37:58 crc kubenswrapper[4812]: W1124 19:37:58.731820 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22b0b100_431d_47ab_9085_8634d013e87f.slice/crio-6200ec55d02fd3b6ddfa04c410429e9ca1eaf6a64034c6b71d3ae1edc02bbbd3 WatchSource:0}: Error finding container 6200ec55d02fd3b6ddfa04c410429e9ca1eaf6a64034c6b71d3ae1edc02bbbd3: Status 404 returned error can't find the container with id 6200ec55d02fd3b6ddfa04c410429e9ca1eaf6a64034c6b71d3ae1edc02bbbd3
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.838215 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22b0b100-431d-47ab-9085-8634d013e87f","Type":"ContainerStarted","Data":"6200ec55d02fd3b6ddfa04c410429e9ca1eaf6a64034c6b71d3ae1edc02bbbd3"}
Nov 24 19:37:58 crc kubenswrapper[4812]: I1124 19:37:58.981978 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="353fa880-de21-4dcc-b244-7a98c079de58" path="/var/lib/kubelet/pods/353fa880-de21-4dcc-b244-7a98c079de58/volumes"
Nov 24 19:37:59 crc kubenswrapper[4812]: I1124 19:37:59.852848 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22b0b100-431d-47ab-9085-8634d013e87f","Type":"ContainerStarted","Data":"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669"}
Nov 24 19:37:59 crc kubenswrapper[4812]: I1124 19:37:59.853279 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22b0b100-431d-47ab-9085-8634d013e87f","Type":"ContainerStarted","Data":"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051"}
Nov 24 19:37:59 crc kubenswrapper[4812]: I1124 19:37:59.883133 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.883114139 podStartE2EDuration="2.883114139s" podCreationTimestamp="2025-11-24 19:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:37:59.879157345 +0000 UTC m=+1273.668109736" watchObservedRunningTime="2025-11-24 19:37:59.883114139 +0000 UTC m=+1273.672066510"
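The pod_startup_latency_tracker record above gives the replacement nova-metadata-0 a 2.88s create-to-running latency. A minimal sketch (Python stdlib only, assuming only that record shape) for extracting these latencies across a whole journal so slow pod starts stand out:

#!/usr/bin/env python3
# Pull per-pod startup latencies from a kubelet journal fed on stdin.
# Assumed record shape (from the log above):
#   "Observed pod startup duration" pod="<ns>/<name>" podStartSLOduration=<seconds> ...
import re, sys

PAT = re.compile(r'"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)')

rows = []
for line in sys.stdin:
    m = PAT.search(line)
    if m:
        rows.append((float(m.group(2)), m.group(1)))

# Slowest starts first.
for slo, pod in sorted(rows, reverse=True):
    print(f"{slo:8.3f}s  {pod}")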
containerID="705fac9235fc051bf3375b13e8f6298b3a7ac75c7c6fb5d5d491e30790d9dd51" exitCode=0 Nov 24 19:38:00 crc kubenswrapper[4812]: I1124 19:38:00.873513 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jv5mj" event={"ID":"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844","Type":"ContainerDied","Data":"705fac9235fc051bf3375b13e8f6298b3a7ac75c7c6fb5d5d491e30790d9dd51"} Nov 24 19:38:00 crc kubenswrapper[4812]: I1124 19:38:00.876559 4812 generic.go:334] "Generic (PLEG): container finished" podID="cd9d1fd1-4439-473b-a619-dd107ae950ff" containerID="22145cb6161b542de832198498201f037566ff179f93263c471dd93f46869d47" exitCode=0 Nov 24 19:38:00 crc kubenswrapper[4812]: I1124 19:38:00.876617 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zpr9" event={"ID":"cd9d1fd1-4439-473b-a619-dd107ae950ff","Type":"ContainerDied","Data":"22145cb6161b542de832198498201f037566ff179f93263c471dd93f46869d47"} Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.361952 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.362023 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.651145 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.668715 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.668759 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.752684 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.757650 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.772583 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-65cf5"] Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.772885 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-797bbc649-65cf5" podUID="43666515-aeb8-458b-a6a4-380d22e689ba" containerName="dnsmasq-dns" containerID="cri-o://cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a" gracePeriod=10 Nov 24 19:38:01 crc kubenswrapper[4812]: I1124 19:38:01.920873 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.445561 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.445911 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.461137 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jv5mj" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.475595 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zpr9" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.493635 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-65cf5" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.576872 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-combined-ca-bundle\") pod \"cd9d1fd1-4439-473b-a619-dd107ae950ff\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.576917 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-config-data\") pod \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.576964 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db5wg\" (UniqueName: \"kubernetes.io/projected/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-kube-api-access-db5wg\") pod \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.576988 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-scripts\") pod \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.577017 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-config-data\") pod \"cd9d1fd1-4439-473b-a619-dd107ae950ff\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.577043 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-scripts\") pod \"cd9d1fd1-4439-473b-a619-dd107ae950ff\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.577066 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8w6gb\" (UniqueName: \"kubernetes.io/projected/cd9d1fd1-4439-473b-a619-dd107ae950ff-kube-api-access-8w6gb\") pod \"cd9d1fd1-4439-473b-a619-dd107ae950ff\" (UID: \"cd9d1fd1-4439-473b-a619-dd107ae950ff\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.577161 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-combined-ca-bundle\") pod \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\" (UID: \"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.582973 4812 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-scripts" (OuterVolumeSpecName: "scripts") pod "cd9d1fd1-4439-473b-a619-dd107ae950ff" (UID: "cd9d1fd1-4439-473b-a619-dd107ae950ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.583353 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-scripts" (OuterVolumeSpecName: "scripts") pod "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" (UID: "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.593086 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9d1fd1-4439-473b-a619-dd107ae950ff-kube-api-access-8w6gb" (OuterVolumeSpecName: "kube-api-access-8w6gb") pod "cd9d1fd1-4439-473b-a619-dd107ae950ff" (UID: "cd9d1fd1-4439-473b-a619-dd107ae950ff"). InnerVolumeSpecName "kube-api-access-8w6gb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.593159 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-kube-api-access-db5wg" (OuterVolumeSpecName: "kube-api-access-db5wg") pod "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" (UID: "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844"). InnerVolumeSpecName "kube-api-access-db5wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.619244 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-config-data" (OuterVolumeSpecName: "config-data") pod "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" (UID: "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.635570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" (UID: "8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.636602 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd9d1fd1-4439-473b-a619-dd107ae950ff" (UID: "cd9d1fd1-4439-473b-a619-dd107ae950ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.637048 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-config-data" (OuterVolumeSpecName: "config-data") pod "cd9d1fd1-4439-473b-a619-dd107ae950ff" (UID: "cd9d1fd1-4439-473b-a619-dd107ae950ff"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.678789 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfcm6\" (UniqueName: \"kubernetes.io/projected/43666515-aeb8-458b-a6a4-380d22e689ba-kube-api-access-cfcm6\") pod \"43666515-aeb8-458b-a6a4-380d22e689ba\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.679058 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-sb\") pod \"43666515-aeb8-458b-a6a4-380d22e689ba\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.679170 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-svc\") pod \"43666515-aeb8-458b-a6a4-380d22e689ba\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.679281 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-config\") pod \"43666515-aeb8-458b-a6a4-380d22e689ba\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.679406 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-nb\") pod \"43666515-aeb8-458b-a6a4-380d22e689ba\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.679509 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-swift-storage-0\") pod \"43666515-aeb8-458b-a6a4-380d22e689ba\" (UID: \"43666515-aeb8-458b-a6a4-380d22e689ba\") " Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.679950 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.680021 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.680087 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.680142 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db5wg\" (UniqueName: \"kubernetes.io/projected/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-kube-api-access-db5wg\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.680196 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:02 crc kubenswrapper[4812]: 
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.680255 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.680314 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd9d1fd1-4439-473b-a619-dd107ae950ff-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.680385 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8w6gb\" (UniqueName: \"kubernetes.io/projected/cd9d1fd1-4439-473b-a619-dd107ae950ff-kube-api-access-8w6gb\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.681966 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43666515-aeb8-458b-a6a4-380d22e689ba-kube-api-access-cfcm6" (OuterVolumeSpecName: "kube-api-access-cfcm6") pod "43666515-aeb8-458b-a6a4-380d22e689ba" (UID: "43666515-aeb8-458b-a6a4-380d22e689ba"). InnerVolumeSpecName "kube-api-access-cfcm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.725990 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-config" (OuterVolumeSpecName: "config") pod "43666515-aeb8-458b-a6a4-380d22e689ba" (UID: "43666515-aeb8-458b-a6a4-380d22e689ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.727093 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "43666515-aeb8-458b-a6a4-380d22e689ba" (UID: "43666515-aeb8-458b-a6a4-380d22e689ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.728253 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "43666515-aeb8-458b-a6a4-380d22e689ba" (UID: "43666515-aeb8-458b-a6a4-380d22e689ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.734174 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "43666515-aeb8-458b-a6a4-380d22e689ba" (UID: "43666515-aeb8-458b-a6a4-380d22e689ba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.736943 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "43666515-aeb8-458b-a6a4-380d22e689ba" (UID: "43666515-aeb8-458b-a6a4-380d22e689ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.782317 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.782376 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfcm6\" (UniqueName: \"kubernetes.io/projected/43666515-aeb8-458b-a6a4-380d22e689ba-kube-api-access-cfcm6\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.782391 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.782405 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.782416 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-config\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.782453 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/43666515-aeb8-458b-a6a4-380d22e689ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.898089 4812 generic.go:334] "Generic (PLEG): container finished" podID="43666515-aeb8-458b-a6a4-380d22e689ba" containerID="cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a" exitCode=0
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.898161 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-797bbc649-65cf5"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.898185 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-65cf5" event={"ID":"43666515-aeb8-458b-a6a4-380d22e689ba","Type":"ContainerDied","Data":"cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a"}
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.898242 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-797bbc649-65cf5" event={"ID":"43666515-aeb8-458b-a6a4-380d22e689ba","Type":"ContainerDied","Data":"0c88b8954410912b442cb480b40681f4bf15d8291f560677e86db1b4b586982f"}
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.898269 4812 scope.go:117] "RemoveContainer" containerID="cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.900394 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-2zpr9" event={"ID":"cd9d1fd1-4439-473b-a619-dd107ae950ff","Type":"ContainerDied","Data":"cb19bd053f111c86bf88fa3eeb1db0ac20f3fef677264f1df4e0522a1f14d659"}
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.900427 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb19bd053f111c86bf88fa3eeb1db0ac20f3fef677264f1df4e0522a1f14d659"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.900492 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-2zpr9"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.903825 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jv5mj"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.907189 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jv5mj" event={"ID":"8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844","Type":"ContainerDied","Data":"c3cadf35780ad0ab3de592649b463e3a4c4e606ec39f0caf92afb53c2503e3c3"}
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.907547 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3cadf35780ad0ab3de592649b463e3a4c4e606ec39f0caf92afb53c2503e3c3"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.995595 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Nov 24 19:38:02 crc kubenswrapper[4812]: E1124 19:38:02.996318 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" containerName="nova-cell1-conductor-db-sync"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.996374 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" containerName="nova-cell1-conductor-db-sync"
Nov 24 19:38:02 crc kubenswrapper[4812]: E1124 19:38:02.996412 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9d1fd1-4439-473b-a619-dd107ae950ff" containerName="nova-manage"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.996422 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9d1fd1-4439-473b-a619-dd107ae950ff" containerName="nova-manage"
Nov 24 19:38:02 crc kubenswrapper[4812]: E1124 19:38:02.996441 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43666515-aeb8-458b-a6a4-380d22e689ba" containerName="init"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.996449 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="43666515-aeb8-458b-a6a4-380d22e689ba" containerName="init"
Nov 24 19:38:02 crc kubenswrapper[4812]: E1124 19:38:02.996463 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43666515-aeb8-458b-a6a4-380d22e689ba" containerName="dnsmasq-dns"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.996470 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="43666515-aeb8-458b-a6a4-380d22e689ba" containerName="dnsmasq-dns"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.996704 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="43666515-aeb8-458b-a6a4-380d22e689ba" containerName="dnsmasq-dns"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.996726 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" containerName="nova-cell1-conductor-db-sync"
Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.996746 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9d1fd1-4439-473b-a619-dd107ae950ff" containerName="nova-manage"
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.999068 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.999106 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.999145 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.999710 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b36a42e19accf5660fd1a99293b3671794dc547e12b8dcbfeada632fbf2982a3"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:38:02 crc kubenswrapper[4812]: I1124 19:38:02.999769 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://b36a42e19accf5660fd1a99293b3671794dc547e12b8dcbfeada632fbf2982a3" gracePeriod=600 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.001598 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.008911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.032820 4812 scope.go:117] "RemoveContainer" containerID="3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.067524 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-65cf5"] Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.070623 4812 scope.go:117] "RemoveContainer" containerID="cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a" Nov 24 19:38:03 crc kubenswrapper[4812]: E1124 19:38:03.071125 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a\": container with ID starting with cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a not found: ID does not exist" containerID="cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.071164 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a"} err="failed to get container status \"cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a\": rpc error: code = NotFound desc = could not find 
container \"cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a\": container with ID starting with cc384a033e885680ff57972eecad744044537cc51849f3362c88b04c86d90d0a not found: ID does not exist" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.071189 4812 scope.go:117] "RemoveContainer" containerID="3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146" Nov 24 19:38:03 crc kubenswrapper[4812]: E1124 19:38:03.071609 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146\": container with ID starting with 3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146 not found: ID does not exist" containerID="3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.071635 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146"} err="failed to get container status \"3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146\": rpc error: code = NotFound desc = could not find container \"3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146\": container with ID starting with 3bcc922155878f867131d2e51a1bc4805377db170cd2e3ef87a2f4071b3c3146 not found: ID does not exist" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.074811 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-797bbc649-65cf5"] Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.126152 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.126612 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-log" containerID="cri-o://45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc" gracePeriod=30 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.127008 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-api" containerID="cri-o://52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e" gracePeriod=30 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.142902 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.156268 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.156485 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="22b0b100-431d-47ab-9085-8634d013e87f" containerName="nova-metadata-log" containerID="cri-o://f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051" gracePeriod=30 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.156868 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="22b0b100-431d-47ab-9085-8634d013e87f" containerName="nova-metadata-metadata" containerID="cri-o://24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669" gracePeriod=30 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.190855 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.190930 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjf65\" (UniqueName: \"kubernetes.io/projected/87b80396-f87d-435e-8478-0ecb34bccd94-kube-api-access-gjf65\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.191013 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.268395 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.268466 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.293061 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjf65\" (UniqueName: \"kubernetes.io/projected/87b80396-f87d-435e-8478-0ecb34bccd94-kube-api-access-gjf65\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.293163 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.293237 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.300286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.300806 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.308478 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjf65\" (UniqueName: 
\"kubernetes.io/projected/87b80396-f87d-435e-8478-0ecb34bccd94-kube-api-access-gjf65\") pod \"nova-cell1-conductor-0\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.452848 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.698583 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.803585 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-nova-metadata-tls-certs\") pod \"22b0b100-431d-47ab-9085-8634d013e87f\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.803638 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-combined-ca-bundle\") pod \"22b0b100-431d-47ab-9085-8634d013e87f\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.803668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnrs8\" (UniqueName: \"kubernetes.io/projected/22b0b100-431d-47ab-9085-8634d013e87f-kube-api-access-tnrs8\") pod \"22b0b100-431d-47ab-9085-8634d013e87f\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.803687 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-config-data\") pod \"22b0b100-431d-47ab-9085-8634d013e87f\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.803706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b0b100-431d-47ab-9085-8634d013e87f-logs\") pod \"22b0b100-431d-47ab-9085-8634d013e87f\" (UID: \"22b0b100-431d-47ab-9085-8634d013e87f\") " Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.804427 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22b0b100-431d-47ab-9085-8634d013e87f-logs" (OuterVolumeSpecName: "logs") pod "22b0b100-431d-47ab-9085-8634d013e87f" (UID: "22b0b100-431d-47ab-9085-8634d013e87f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.809491 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b0b100-431d-47ab-9085-8634d013e87f-kube-api-access-tnrs8" (OuterVolumeSpecName: "kube-api-access-tnrs8") pod "22b0b100-431d-47ab-9085-8634d013e87f" (UID: "22b0b100-431d-47ab-9085-8634d013e87f"). InnerVolumeSpecName "kube-api-access-tnrs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.835248 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-config-data" (OuterVolumeSpecName: "config-data") pod "22b0b100-431d-47ab-9085-8634d013e87f" (UID: "22b0b100-431d-47ab-9085-8634d013e87f"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.838276 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22b0b100-431d-47ab-9085-8634d013e87f" (UID: "22b0b100-431d-47ab-9085-8634d013e87f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.859924 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "22b0b100-431d-47ab-9085-8634d013e87f" (UID: "22b0b100-431d-47ab-9085-8634d013e87f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.905579 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.905633 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.905647 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnrs8\" (UniqueName: \"kubernetes.io/projected/22b0b100-431d-47ab-9085-8634d013e87f-kube-api-access-tnrs8\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.905660 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22b0b100-431d-47ab-9085-8634d013e87f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.905672 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/22b0b100-431d-47ab-9085-8634d013e87f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.918122 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="b36a42e19accf5660fd1a99293b3671794dc547e12b8dcbfeada632fbf2982a3" exitCode=0 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.918192 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"b36a42e19accf5660fd1a99293b3671794dc547e12b8dcbfeada632fbf2982a3"} Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.918215 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"d0df34c9db895de19526e2349039e7168974e5c1bd66252317644188b384437d"} Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.918232 4812 scope.go:117] "RemoveContainer" containerID="d0e66c2294fc71bbe8db0b484c7769b46468b08b26476c3c5d5d976af6fcf62b" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.927831 4812 generic.go:334] "Generic (PLEG): 
container finished" podID="22b0b100-431d-47ab-9085-8634d013e87f" containerID="24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669" exitCode=0 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.928008 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.928108 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22b0b100-431d-47ab-9085-8634d013e87f","Type":"ContainerDied","Data":"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669"} Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.928309 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22b0b100-431d-47ab-9085-8634d013e87f","Type":"ContainerDied","Data":"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051"} Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.928015 4812 generic.go:334] "Generic (PLEG): container finished" podID="22b0b100-431d-47ab-9085-8634d013e87f" containerID="f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051" exitCode=143 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.928681 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"22b0b100-431d-47ab-9085-8634d013e87f","Type":"ContainerDied","Data":"6200ec55d02fd3b6ddfa04c410429e9ca1eaf6a64034c6b71d3ae1edc02bbbd3"} Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.931196 4812 generic.go:334] "Generic (PLEG): container finished" podID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerID="45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc" exitCode=143 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.931648 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9585686c-a5f0-4e0e-a761-c0319546fb74" containerName="nova-scheduler-scheduler" containerID="cri-o://ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" gracePeriod=30 Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.932053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b1eff93-3e0b-440b-8baf-56e84e60ded1","Type":"ContainerDied","Data":"45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc"} Nov 24 19:38:03 crc kubenswrapper[4812]: I1124 19:38:03.984225 4812 scope.go:117] "RemoveContainer" containerID="24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.044409 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.057875 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.062382 4812 scope.go:117] "RemoveContainer" containerID="f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.068412 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.076252 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:04 crc kubenswrapper[4812]: E1124 19:38:04.076959 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b0b100-431d-47ab-9085-8634d013e87f" 
containerName="nova-metadata-log" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.077023 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b0b100-431d-47ab-9085-8634d013e87f" containerName="nova-metadata-log" Nov 24 19:38:04 crc kubenswrapper[4812]: E1124 19:38:04.077090 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b0b100-431d-47ab-9085-8634d013e87f" containerName="nova-metadata-metadata" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.077149 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b0b100-431d-47ab-9085-8634d013e87f" containerName="nova-metadata-metadata" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.077407 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b0b100-431d-47ab-9085-8634d013e87f" containerName="nova-metadata-log" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.077528 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b0b100-431d-47ab-9085-8634d013e87f" containerName="nova-metadata-metadata" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.078687 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.084059 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.084804 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.087207 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.164643 4812 scope.go:117] "RemoveContainer" containerID="24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669" Nov 24 19:38:04 crc kubenswrapper[4812]: E1124 19:38:04.165271 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669\": container with ID starting with 24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669 not found: ID does not exist" containerID="24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.166298 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669"} err="failed to get container status \"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669\": rpc error: code = NotFound desc = could not find container \"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669\": container with ID starting with 24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669 not found: ID does not exist" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.166365 4812 scope.go:117] "RemoveContainer" containerID="f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051" Nov 24 19:38:04 crc kubenswrapper[4812]: E1124 19:38:04.168106 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051\": container with ID starting with f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051 not found: ID does not exist" 
containerID="f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.168138 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051"} err="failed to get container status \"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051\": rpc error: code = NotFound desc = could not find container \"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051\": container with ID starting with f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051 not found: ID does not exist" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.168157 4812 scope.go:117] "RemoveContainer" containerID="24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.173894 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669"} err="failed to get container status \"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669\": rpc error: code = NotFound desc = could not find container \"24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669\": container with ID starting with 24b6a7fa3bdb761611b8b6295d5a5a998fe77a1d52639545f31e6c61a6fc7669 not found: ID does not exist" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.173971 4812 scope.go:117] "RemoveContainer" containerID="f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.178763 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051"} err="failed to get container status \"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051\": rpc error: code = NotFound desc = could not find container \"f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051\": container with ID starting with f762e688c1ce4412ce53a551e3a6c75d2d9046ce0de90f1de311ee11a56f9051 not found: ID does not exist" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.211796 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs2rn\" (UniqueName: \"kubernetes.io/projected/9f9b1104-0b19-40dd-b447-364a7b4dc79a-kube-api-access-rs2rn\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.211907 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-config-data\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.211937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.211977 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/9f9b1104-0b19-40dd-b447-364a7b4dc79a-logs\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.212003 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.313872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f9b1104-0b19-40dd-b447-364a7b4dc79a-logs\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.313951 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.314027 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs2rn\" (UniqueName: \"kubernetes.io/projected/9f9b1104-0b19-40dd-b447-364a7b4dc79a-kube-api-access-rs2rn\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.314173 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-config-data\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.314214 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.314813 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f9b1104-0b19-40dd-b447-364a7b4dc79a-logs\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.318653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.318689 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.319054 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-config-data\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.334846 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs2rn\" (UniqueName: \"kubernetes.io/projected/9f9b1104-0b19-40dd-b447-364a7b4dc79a-kube-api-access-rs2rn\") pod \"nova-metadata-0\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.490365 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.960949 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"87b80396-f87d-435e-8478-0ecb34bccd94","Type":"ContainerStarted","Data":"5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd"} Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.961287 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"87b80396-f87d-435e-8478-0ecb34bccd94","Type":"ContainerStarted","Data":"1430864e45d55ed01cabb5542b7ac160ca398a46ef223d63a7d5381c2de17d04"} Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.961391 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.964536 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:04 crc kubenswrapper[4812]: W1124 19:38:04.969070 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f9b1104_0b19_40dd_b447_364a7b4dc79a.slice/crio-28816508457fa5185267caf7d7776b4d6f84013206f209eee00a7d5985eac364 WatchSource:0}: Error finding container 28816508457fa5185267caf7d7776b4d6f84013206f209eee00a7d5985eac364: Status 404 returned error can't find the container with id 28816508457fa5185267caf7d7776b4d6f84013206f209eee00a7d5985eac364 Nov 24 19:38:04 crc kubenswrapper[4812]: I1124 19:38:04.995298 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.995276956 podStartE2EDuration="2.995276956s" podCreationTimestamp="2025-11-24 19:38:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:04.991246771 +0000 UTC m=+1278.780199142" watchObservedRunningTime="2025-11-24 19:38:04.995276956 +0000 UTC m=+1278.784229327" Nov 24 19:38:05 crc kubenswrapper[4812]: I1124 19:38:05.003853 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b0b100-431d-47ab-9085-8634d013e87f" path="/var/lib/kubelet/pods/22b0b100-431d-47ab-9085-8634d013e87f/volumes" Nov 24 19:38:05 crc kubenswrapper[4812]: I1124 19:38:05.005383 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43666515-aeb8-458b-a6a4-380d22e689ba" path="/var/lib/kubelet/pods/43666515-aeb8-458b-a6a4-380d22e689ba/volumes" Nov 24 19:38:05 crc kubenswrapper[4812]: I1124 19:38:05.973975 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"9f9b1104-0b19-40dd-b447-364a7b4dc79a","Type":"ContainerStarted","Data":"841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806"} Nov 24 19:38:05 crc kubenswrapper[4812]: I1124 19:38:05.974955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f9b1104-0b19-40dd-b447-364a7b4dc79a","Type":"ContainerStarted","Data":"ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b"} Nov 24 19:38:05 crc kubenswrapper[4812]: I1124 19:38:05.975022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f9b1104-0b19-40dd-b447-364a7b4dc79a","Type":"ContainerStarted","Data":"28816508457fa5185267caf7d7776b4d6f84013206f209eee00a7d5985eac364"} Nov 24 19:38:06 crc kubenswrapper[4812]: I1124 19:38:06.001538 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.001513731 podStartE2EDuration="3.001513731s" podCreationTimestamp="2025-11-24 19:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:05.998076702 +0000 UTC m=+1279.787029083" watchObservedRunningTime="2025-11-24 19:38:06.001513731 +0000 UTC m=+1279.790466142" Nov 24 19:38:06 crc kubenswrapper[4812]: E1124 19:38:06.671094 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:38:06 crc kubenswrapper[4812]: E1124 19:38:06.673483 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:38:06 crc kubenswrapper[4812]: E1124 19:38:06.676410 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:38:06 crc kubenswrapper[4812]: E1124 19:38:06.676524 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9585686c-a5f0-4e0e-a761-c0319546fb74" containerName="nova-scheduler-scheduler" Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.542442 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.582853 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-combined-ca-bundle\") pod \"9585686c-a5f0-4e0e-a761-c0319546fb74\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.582997 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnjgh\" (UniqueName: \"kubernetes.io/projected/9585686c-a5f0-4e0e-a761-c0319546fb74-kube-api-access-lnjgh\") pod \"9585686c-a5f0-4e0e-a761-c0319546fb74\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.583159 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-config-data\") pod \"9585686c-a5f0-4e0e-a761-c0319546fb74\" (UID: \"9585686c-a5f0-4e0e-a761-c0319546fb74\") " Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.597712 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9585686c-a5f0-4e0e-a761-c0319546fb74-kube-api-access-lnjgh" (OuterVolumeSpecName: "kube-api-access-lnjgh") pod "9585686c-a5f0-4e0e-a761-c0319546fb74" (UID: "9585686c-a5f0-4e0e-a761-c0319546fb74"). InnerVolumeSpecName "kube-api-access-lnjgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.622261 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9585686c-a5f0-4e0e-a761-c0319546fb74" (UID: "9585686c-a5f0-4e0e-a761-c0319546fb74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.628591 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-config-data" (OuterVolumeSpecName: "config-data") pod "9585686c-a5f0-4e0e-a761-c0319546fb74" (UID: "9585686c-a5f0-4e0e-a761-c0319546fb74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.685363 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnjgh\" (UniqueName: \"kubernetes.io/projected/9585686c-a5f0-4e0e-a761-c0319546fb74-kube-api-access-lnjgh\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.685406 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:07 crc kubenswrapper[4812]: I1124 19:38:07.685420 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9585686c-a5f0-4e0e-a761-c0319546fb74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.000688 4812 generic.go:334] "Generic (PLEG): container finished" podID="9585686c-a5f0-4e0e-a761-c0319546fb74" containerID="ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" exitCode=0 Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.000732 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.000748 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9585686c-a5f0-4e0e-a761-c0319546fb74","Type":"ContainerDied","Data":"ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196"} Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.001441 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9585686c-a5f0-4e0e-a761-c0319546fb74","Type":"ContainerDied","Data":"85cea4fb0718f8077475110dcbceeb81289aa683001d23602934c7f2354bad6a"} Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.001490 4812 scope.go:117] "RemoveContainer" containerID="ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.050756 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.073588 4812 scope.go:117] "RemoveContainer" containerID="ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.074184 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:08 crc kubenswrapper[4812]: E1124 19:38:08.074321 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196\": container with ID starting with ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196 not found: ID does not exist" containerID="ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.074386 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196"} err="failed to get container status \"ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196\": rpc error: code = NotFound desc = could not find container \"ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196\": container with ID starting with 
ea217e4bdb8791c949788eb7182c922a5bf82a679b9e10c9f2d96d98da2f4196 not found: ID does not exist" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.087029 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:08 crc kubenswrapper[4812]: E1124 19:38:08.087694 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9585686c-a5f0-4e0e-a761-c0319546fb74" containerName="nova-scheduler-scheduler" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.087727 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9585686c-a5f0-4e0e-a761-c0319546fb74" containerName="nova-scheduler-scheduler" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.088044 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9585686c-a5f0-4e0e-a761-c0319546fb74" containerName="nova-scheduler-scheduler" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.089025 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.096953 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.097566 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.194962 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbfqt\" (UniqueName: \"kubernetes.io/projected/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-kube-api-access-nbfqt\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.195694 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-config-data\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.195942 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.298047 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.298182 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbfqt\" (UniqueName: \"kubernetes.io/projected/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-kube-api-access-nbfqt\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.298273 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-config-data\") pod \"nova-scheduler-0\" 
(UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.306679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.306792 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-config-data\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.332408 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbfqt\" (UniqueName: \"kubernetes.io/projected/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-kube-api-access-nbfqt\") pod \"nova-scheduler-0\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.418367 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.891413 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:08 crc kubenswrapper[4812]: W1124 19:38:08.901851 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6e7e914_8687_44d0_ae3b_d1cbdf4e861c.slice/crio-06ae2986347285f357e0ecc6c04334edb5c452e2fc838a0fce4cc3b654076287 WatchSource:0}: Error finding container 06ae2986347285f357e0ecc6c04334edb5c452e2fc838a0fce4cc3b654076287: Status 404 returned error can't find the container with id 06ae2986347285f357e0ecc6c04334edb5c452e2fc838a0fce4cc3b654076287 Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.958899 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:08 crc kubenswrapper[4812]: I1124 19:38:08.980403 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9585686c-a5f0-4e0e-a761-c0319546fb74" path="/var/lib/kubelet/pods/9585686c-a5f0-4e0e-a761-c0319546fb74/volumes" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.011396 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-combined-ca-bundle\") pod \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.011483 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-config-data\") pod \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.011537 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbg7x\" (UniqueName: \"kubernetes.io/projected/0b1eff93-3e0b-440b-8baf-56e84e60ded1-kube-api-access-xbg7x\") pod \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.011659 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b1eff93-3e0b-440b-8baf-56e84e60ded1-logs\") pod \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\" (UID: \"0b1eff93-3e0b-440b-8baf-56e84e60ded1\") " Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.012697 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b1eff93-3e0b-440b-8baf-56e84e60ded1-logs" (OuterVolumeSpecName: "logs") pod "0b1eff93-3e0b-440b-8baf-56e84e60ded1" (UID: "0b1eff93-3e0b-440b-8baf-56e84e60ded1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.016119 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1eff93-3e0b-440b-8baf-56e84e60ded1-kube-api-access-xbg7x" (OuterVolumeSpecName: "kube-api-access-xbg7x") pod "0b1eff93-3e0b-440b-8baf-56e84e60ded1" (UID: "0b1eff93-3e0b-440b-8baf-56e84e60ded1"). InnerVolumeSpecName "kube-api-access-xbg7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.016701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c","Type":"ContainerStarted","Data":"06ae2986347285f357e0ecc6c04334edb5c452e2fc838a0fce4cc3b654076287"} Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.019586 4812 generic.go:334] "Generic (PLEG): container finished" podID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerID="52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e" exitCode=0 Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.019625 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b1eff93-3e0b-440b-8baf-56e84e60ded1","Type":"ContainerDied","Data":"52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e"} Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.019646 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.019653 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0b1eff93-3e0b-440b-8baf-56e84e60ded1","Type":"ContainerDied","Data":"913529e5f40fcc24c829b7543360eb46509826b376d7c00887889336d0a05cd6"} Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.019704 4812 scope.go:117] "RemoveContainer" containerID="52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.044168 4812 scope.go:117] "RemoveContainer" containerID="45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.052469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b1eff93-3e0b-440b-8baf-56e84e60ded1" (UID: "0b1eff93-3e0b-440b-8baf-56e84e60ded1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.055851 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-config-data" (OuterVolumeSpecName: "config-data") pod "0b1eff93-3e0b-440b-8baf-56e84e60ded1" (UID: "0b1eff93-3e0b-440b-8baf-56e84e60ded1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.064405 4812 scope.go:117] "RemoveContainer" containerID="52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e" Nov 24 19:38:09 crc kubenswrapper[4812]: E1124 19:38:09.064857 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e\": container with ID starting with 52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e not found: ID does not exist" containerID="52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.064885 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e"} err="failed to get container status \"52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e\": rpc error: code = NotFound desc = could not find container \"52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e\": container with ID starting with 52b419af68e37b67d492f27613feebf5748a485faf9628cf18b1c904b42f313e not found: ID does not exist" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.064904 4812 scope.go:117] "RemoveContainer" containerID="45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc" Nov 24 19:38:09 crc kubenswrapper[4812]: E1124 19:38:09.065526 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc\": container with ID starting with 45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc not found: ID does not exist" containerID="45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.065577 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc"} err="failed to get container status \"45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc\": rpc error: code = NotFound desc = could not find container \"45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc\": container with ID starting with 45c40b45fbbab6eb9594c78d319db459feaa5567c3e3aa78f5440922ff7f46cc not found: ID does not exist" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.113959 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.114000 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b1eff93-3e0b-440b-8baf-56e84e60ded1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.114014 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbg7x\" (UniqueName: \"kubernetes.io/projected/0b1eff93-3e0b-440b-8baf-56e84e60ded1-kube-api-access-xbg7x\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.114028 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0b1eff93-3e0b-440b-8baf-56e84e60ded1-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.395295 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.414209 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.435641 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:09 crc kubenswrapper[4812]: E1124 19:38:09.436124 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-log" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.436138 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-log" Nov 24 19:38:09 crc kubenswrapper[4812]: E1124 19:38:09.436174 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-api" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.436180 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-api" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.436381 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-log" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.436395 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" containerName="nova-api-api" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.453397 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.464423 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.487216 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.491316 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.492120 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.520467 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdfd6\" (UniqueName: \"kubernetes.io/projected/76b9ef26-80dd-4a42-8ec5-d28479574740-kube-api-access-bdfd6\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.520554 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.520609 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-config-data\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.520653 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b9ef26-80dd-4a42-8ec5-d28479574740-logs\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.622068 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdfd6\" (UniqueName: \"kubernetes.io/projected/76b9ef26-80dd-4a42-8ec5-d28479574740-kube-api-access-bdfd6\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.622193 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.622288 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-config-data\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.622380 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b9ef26-80dd-4a42-8ec5-d28479574740-logs\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " 
pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.623077 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b9ef26-80dd-4a42-8ec5-d28479574740-logs\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.631182 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.644854 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-config-data\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.648702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdfd6\" (UniqueName: \"kubernetes.io/projected/76b9ef26-80dd-4a42-8ec5-d28479574740-kube-api-access-bdfd6\") pod \"nova-api-0\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " pod="openstack/nova-api-0" Nov 24 19:38:09 crc kubenswrapper[4812]: I1124 19:38:09.791632 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:10 crc kubenswrapper[4812]: I1124 19:38:10.033990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c","Type":"ContainerStarted","Data":"70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3"} Nov 24 19:38:10 crc kubenswrapper[4812]: I1124 19:38:10.056244 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.056228494 podStartE2EDuration="2.056228494s" podCreationTimestamp="2025-11-24 19:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:10.047493783 +0000 UTC m=+1283.836446154" watchObservedRunningTime="2025-11-24 19:38:10.056228494 +0000 UTC m=+1283.845180865" Nov 24 19:38:10 crc kubenswrapper[4812]: I1124 19:38:10.244099 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:10 crc kubenswrapper[4812]: W1124 19:38:10.249999 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76b9ef26_80dd_4a42_8ec5_d28479574740.slice/crio-6b6c04210c03c4c604560c70dff0721ce36442bd368733d98343554eea3ac691 WatchSource:0}: Error finding container 6b6c04210c03c4c604560c70dff0721ce36442bd368733d98343554eea3ac691: Status 404 returned error can't find the container with id 6b6c04210c03c4c604560c70dff0721ce36442bd368733d98343554eea3ac691 Nov 24 19:38:10 crc kubenswrapper[4812]: I1124 19:38:10.980162 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1eff93-3e0b-440b-8baf-56e84e60ded1" path="/var/lib/kubelet/pods/0b1eff93-3e0b-440b-8baf-56e84e60ded1/volumes" Nov 24 19:38:11 crc kubenswrapper[4812]: I1124 19:38:11.057291 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"76b9ef26-80dd-4a42-8ec5-d28479574740","Type":"ContainerStarted","Data":"058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d"} Nov 24 19:38:11 crc kubenswrapper[4812]: I1124 19:38:11.057418 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b9ef26-80dd-4a42-8ec5-d28479574740","Type":"ContainerStarted","Data":"791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d"} Nov 24 19:38:11 crc kubenswrapper[4812]: I1124 19:38:11.057445 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b9ef26-80dd-4a42-8ec5-d28479574740","Type":"ContainerStarted","Data":"6b6c04210c03c4c604560c70dff0721ce36442bd368733d98343554eea3ac691"} Nov 24 19:38:11 crc kubenswrapper[4812]: I1124 19:38:11.093174 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.093149399 podStartE2EDuration="2.093149399s" podCreationTimestamp="2025-11-24 19:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:11.081024551 +0000 UTC m=+1284.869976952" watchObservedRunningTime="2025-11-24 19:38:11.093149399 +0000 UTC m=+1284.882101810" Nov 24 19:38:13 crc kubenswrapper[4812]: I1124 19:38:13.419498 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 19:38:13 crc kubenswrapper[4812]: I1124 19:38:13.500502 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 19:38:14 crc kubenswrapper[4812]: I1124 19:38:14.491329 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 19:38:14 crc kubenswrapper[4812]: I1124 19:38:14.491806 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 19:38:15 crc kubenswrapper[4812]: I1124 19:38:15.160464 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 19:38:15 crc kubenswrapper[4812]: I1124 19:38:15.502544 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 19:38:15 crc kubenswrapper[4812]: I1124 19:38:15.502552 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 19:38:18 crc kubenswrapper[4812]: I1124 19:38:18.418632 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 19:38:18 crc kubenswrapper[4812]: I1124 19:38:18.452208 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.046599 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.046848 4812 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/kube-state-metrics-0" podUID="c066990e-686f-4d3b-bb9d-185df0b741ec" containerName="kube-state-metrics" containerID="cri-o://6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7" gracePeriod=30 Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.204082 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.616626 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.728217 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k2q4\" (UniqueName: \"kubernetes.io/projected/c066990e-686f-4d3b-bb9d-185df0b741ec-kube-api-access-5k2q4\") pod \"c066990e-686f-4d3b-bb9d-185df0b741ec\" (UID: \"c066990e-686f-4d3b-bb9d-185df0b741ec\") " Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.734549 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c066990e-686f-4d3b-bb9d-185df0b741ec-kube-api-access-5k2q4" (OuterVolumeSpecName: "kube-api-access-5k2q4") pod "c066990e-686f-4d3b-bb9d-185df0b741ec" (UID: "c066990e-686f-4d3b-bb9d-185df0b741ec"). InnerVolumeSpecName "kube-api-access-5k2q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.792300 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.792384 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 19:38:19 crc kubenswrapper[4812]: I1124 19:38:19.830258 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k2q4\" (UniqueName: \"kubernetes.io/projected/c066990e-686f-4d3b-bb9d-185df0b741ec-kube-api-access-5k2q4\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.154137 4812 generic.go:334] "Generic (PLEG): container finished" podID="c066990e-686f-4d3b-bb9d-185df0b741ec" containerID="6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7" exitCode=2 Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.154885 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.155386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c066990e-686f-4d3b-bb9d-185df0b741ec","Type":"ContainerDied","Data":"6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7"} Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.155419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c066990e-686f-4d3b-bb9d-185df0b741ec","Type":"ContainerDied","Data":"ce01c7ef69b38dd6304cd8a104a2b1cbecd4f76a27c6f1e74293ac2bc37ec65c"} Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.155435 4812 scope.go:117] "RemoveContainer" containerID="6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.192630 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.193914 4812 scope.go:117] "RemoveContainer" containerID="6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7" Nov 24 19:38:20 crc kubenswrapper[4812]: E1124 19:38:20.194404 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7\": container with ID starting with 6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7 not found: ID does not exist" containerID="6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.194447 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7"} err="failed to get container status \"6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7\": rpc error: code = NotFound desc = could not find container \"6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7\": container with ID starting with 6e43bdba808025dd018f60635760781cec24460f3701b1a7863bd20153c99de7 not found: ID does not exist" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.200555 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.246742 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:38:20 crc kubenswrapper[4812]: E1124 19:38:20.248191 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c066990e-686f-4d3b-bb9d-185df0b741ec" containerName="kube-state-metrics" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.248244 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c066990e-686f-4d3b-bb9d-185df0b741ec" containerName="kube-state-metrics" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.248867 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c066990e-686f-4d3b-bb9d-185df0b741ec" containerName="kube-state-metrics" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.250084 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.252425 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.253067 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.258059 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.456455 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.456579 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.456624 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.456658 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8lt\" (UniqueName: \"kubernetes.io/projected/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-api-access-9t8lt\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.558740 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.558813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.558850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t8lt\" (UniqueName: \"kubernetes.io/projected/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-api-access-9t8lt\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.558904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.564396 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.564626 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.572201 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.578116 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t8lt\" (UniqueName: \"kubernetes.io/projected/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-api-access-9t8lt\") pod \"kube-state-metrics-0\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.849992 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.851288 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-central-agent" containerID="cri-o://0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299" gracePeriod=30 Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.851653 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-notification-agent" containerID="cri-o://b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb" gracePeriod=30 Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.851696 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="sg-core" containerID="cri-o://9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b" gracePeriod=30 Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.851605 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="proxy-httpd" containerID="cri-o://637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8" gracePeriod=30 Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.873156 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.874515 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 19:38:20 crc kubenswrapper[4812]: I1124 19:38:20.874523 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.196:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 19:38:21 crc kubenswrapper[4812]: I1124 19:38:21.009277 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c066990e-686f-4d3b-bb9d-185df0b741ec" path="/var/lib/kubelet/pods/c066990e-686f-4d3b-bb9d-185df0b741ec/volumes" Nov 24 19:38:21 crc kubenswrapper[4812]: I1124 19:38:21.189210 4812 generic.go:334] "Generic (PLEG): container finished" podID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerID="637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8" exitCode=0 Nov 24 19:38:21 crc kubenswrapper[4812]: I1124 19:38:21.189646 4812 generic.go:334] "Generic (PLEG): container finished" podID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerID="9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b" exitCode=2 Nov 24 19:38:21 crc kubenswrapper[4812]: I1124 19:38:21.189427 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerDied","Data":"637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8"} Nov 24 19:38:21 crc kubenswrapper[4812]: I1124 19:38:21.189706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerDied","Data":"9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b"} Nov 24 19:38:21 crc kubenswrapper[4812]: I1124 19:38:21.334402 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:38:21 crc kubenswrapper[4812]: W1124 19:38:21.341917 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9dbc131_33f8_4df7_88ba_2d90e93a436c.slice/crio-53e80f140a60dea173919a25bc6a589b453849483c9dc0c47422cfab29e78f8a WatchSource:0}: Error finding container 53e80f140a60dea173919a25bc6a589b453849483c9dc0c47422cfab29e78f8a: Status 404 returned error can't find the container with id 53e80f140a60dea173919a25bc6a589b453849483c9dc0c47422cfab29e78f8a Nov 24 19:38:22 crc kubenswrapper[4812]: I1124 19:38:22.200582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9dbc131-33f8-4df7-88ba-2d90e93a436c","Type":"ContainerStarted","Data":"8ae9509f408e8d7a118f2c286e6243f94fd2c8cc3503c8aa239b550061eb119e"} Nov 24 19:38:22 crc kubenswrapper[4812]: I1124 19:38:22.200932 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9dbc131-33f8-4df7-88ba-2d90e93a436c","Type":"ContainerStarted","Data":"53e80f140a60dea173919a25bc6a589b453849483c9dc0c47422cfab29e78f8a"} Nov 24 19:38:22 crc kubenswrapper[4812]: I1124 19:38:22.201298 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 19:38:22 crc kubenswrapper[4812]: I1124 19:38:22.204140 4812 generic.go:334] "Generic (PLEG): container finished" podID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerID="0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299" exitCode=0 Nov 24 19:38:22 crc kubenswrapper[4812]: I1124 19:38:22.204177 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerDied","Data":"0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299"} Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.040922 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.060981 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.656014614 podStartE2EDuration="4.060963706s" podCreationTimestamp="2025-11-24 19:38:20 +0000 UTC" firstStartedPulling="2025-11-24 19:38:21.344945434 +0000 UTC m=+1295.133897815" lastFinishedPulling="2025-11-24 19:38:21.749894526 +0000 UTC m=+1295.538846907" observedRunningTime="2025-11-24 19:38:22.230729449 +0000 UTC m=+1296.019681840" watchObservedRunningTime="2025-11-24 19:38:24.060963706 +0000 UTC m=+1297.849916077" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.131743 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znch9\" (UniqueName: \"kubernetes.io/projected/ceb2200c-b4b1-4664-8daf-8cd31df808d7-kube-api-access-znch9\") pod \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.131825 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-sg-core-conf-yaml\") pod \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.131872 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-run-httpd\") pod \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.131909 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-scripts\") pod \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.131929 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-log-httpd\") pod \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.131960 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-config-data\") pod \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 
19:38:24.132001 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-combined-ca-bundle\") pod \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\" (UID: \"ceb2200c-b4b1-4664-8daf-8cd31df808d7\") " Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.132578 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ceb2200c-b4b1-4664-8daf-8cd31df808d7" (UID: "ceb2200c-b4b1-4664-8daf-8cd31df808d7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.132622 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ceb2200c-b4b1-4664-8daf-8cd31df808d7" (UID: "ceb2200c-b4b1-4664-8daf-8cd31df808d7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.138587 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-scripts" (OuterVolumeSpecName: "scripts") pod "ceb2200c-b4b1-4664-8daf-8cd31df808d7" (UID: "ceb2200c-b4b1-4664-8daf-8cd31df808d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.155102 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb2200c-b4b1-4664-8daf-8cd31df808d7-kube-api-access-znch9" (OuterVolumeSpecName: "kube-api-access-znch9") pod "ceb2200c-b4b1-4664-8daf-8cd31df808d7" (UID: "ceb2200c-b4b1-4664-8daf-8cd31df808d7"). InnerVolumeSpecName "kube-api-access-znch9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.179351 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ceb2200c-b4b1-4664-8daf-8cd31df808d7" (UID: "ceb2200c-b4b1-4664-8daf-8cd31df808d7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.218742 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceb2200c-b4b1-4664-8daf-8cd31df808d7" (UID: "ceb2200c-b4b1-4664-8daf-8cd31df808d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.228528 4812 generic.go:334] "Generic (PLEG): container finished" podID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerID="b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb" exitCode=0 Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.228570 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerDied","Data":"b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb"} Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.228604 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ceb2200c-b4b1-4664-8daf-8cd31df808d7","Type":"ContainerDied","Data":"1ffb00e3e124b72ca81b93a4506bbc7dbdd4b9414dbf4e8d866a2cfe71f928bc"} Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.228605 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.228632 4812 scope.go:117] "RemoveContainer" containerID="637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.233324 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.233372 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.233384 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.233393 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znch9\" (UniqueName: \"kubernetes.io/projected/ceb2200c-b4b1-4664-8daf-8cd31df808d7-kube-api-access-znch9\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.233402 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.233410 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ceb2200c-b4b1-4664-8daf-8cd31df808d7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.250023 4812 scope.go:117] "RemoveContainer" containerID="9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.253145 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-config-data" (OuterVolumeSpecName: "config-data") pod "ceb2200c-b4b1-4664-8daf-8cd31df808d7" (UID: "ceb2200c-b4b1-4664-8daf-8cd31df808d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.268213 4812 scope.go:117] "RemoveContainer" containerID="b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.285408 4812 scope.go:117] "RemoveContainer" containerID="0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.309633 4812 scope.go:117] "RemoveContainer" containerID="637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8" Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.310078 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8\": container with ID starting with 637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8 not found: ID does not exist" containerID="637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.310114 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8"} err="failed to get container status \"637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8\": rpc error: code = NotFound desc = could not find container \"637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8\": container with ID starting with 637db67528ce6aed7f9f40f0fb19ce29de2305e95a27aef6f8cdfbb2c6b828c8 not found: ID does not exist" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.310136 4812 scope.go:117] "RemoveContainer" containerID="9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b" Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.310437 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b\": container with ID starting with 9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b not found: ID does not exist" containerID="9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.310462 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b"} err="failed to get container status \"9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b\": rpc error: code = NotFound desc = could not find container \"9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b\": container with ID starting with 9c52b55c2a544f96764f8a80d1b80e4b1e79340ecc278983508ab62023d06b0b not found: ID does not exist" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.310477 4812 scope.go:117] "RemoveContainer" containerID="b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb" Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.310914 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb\": container with ID starting with b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb not found: ID does not exist" containerID="b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb" Nov 24 19:38:24 crc 
kubenswrapper[4812]: I1124 19:38:24.310966 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb"} err="failed to get container status \"b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb\": rpc error: code = NotFound desc = could not find container \"b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb\": container with ID starting with b28a85b61a046fe18eb46083ce2596dfa0fa12630af7e68f3ecf8d7def7a12eb not found: ID does not exist" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.310992 4812 scope.go:117] "RemoveContainer" containerID="0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299" Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.311245 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299\": container with ID starting with 0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299 not found: ID does not exist" containerID="0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.311323 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299"} err="failed to get container status \"0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299\": rpc error: code = NotFound desc = could not find container \"0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299\": container with ID starting with 0ce6851bafb5524ea4e01da18fb3bd216ba7cc4dcb648829219517ac8572e299 not found: ID does not exist" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.336614 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceb2200c-b4b1-4664-8daf-8cd31df808d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.497682 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.499862 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.504326 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.561273 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.570378 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.593356 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.593964 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-central-agent" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.593995 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-central-agent" Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.594041 4812 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="proxy-httpd" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.594054 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="proxy-httpd" Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.594078 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="sg-core" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.594089 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="sg-core" Nov 24 19:38:24 crc kubenswrapper[4812]: E1124 19:38:24.594102 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-notification-agent" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.594112 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-notification-agent" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.594453 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-notification-agent" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.594483 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="proxy-httpd" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.594511 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="ceilometer-central-agent" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.594540 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" containerName="sg-core" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.596907 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.599990 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.600522 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.601712 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.604457 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744051 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8q4r\" (UniqueName: \"kubernetes.io/projected/584ff91f-b107-4cb7-a33d-f02a782466f8-kube-api-access-m8q4r\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-run-httpd\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744142 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-config-data\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744170 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-scripts\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744239 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744262 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744286 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-log-httpd\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.744436 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.845905 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8q4r\" (UniqueName: \"kubernetes.io/projected/584ff91f-b107-4cb7-a33d-f02a782466f8-kube-api-access-m8q4r\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.845957 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-run-httpd\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.845985 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-config-data\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.846012 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-scripts\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.846051 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.846069 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.846089 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-log-httpd\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.846123 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.846426 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-run-httpd\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.847441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-log-httpd\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.849634 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.849678 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.851099 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-scripts\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.851577 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.851969 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-config-data\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.862124 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8q4r\" (UniqueName: \"kubernetes.io/projected/584ff91f-b107-4cb7-a33d-f02a782466f8-kube-api-access-m8q4r\") pod \"ceilometer-0\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " pod="openstack/ceilometer-0" Nov 24 19:38:24 crc kubenswrapper[4812]: I1124 19:38:24.917169 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:25 crc kubenswrapper[4812]: I1124 19:38:25.002797 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb2200c-b4b1-4664-8daf-8cd31df808d7" path="/var/lib/kubelet/pods/ceb2200c-b4b1-4664-8daf-8cd31df808d7/volumes" Nov 24 19:38:25 crc kubenswrapper[4812]: I1124 19:38:25.248974 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 19:38:25 crc kubenswrapper[4812]: I1124 19:38:25.406508 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:25 crc kubenswrapper[4812]: W1124 19:38:25.410305 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod584ff91f_b107_4cb7_a33d_f02a782466f8.slice/crio-c65700f65f8f3db39f10299a07cd0ef7ef589eff45b50b173e29d596c7171e6a WatchSource:0}: Error finding container c65700f65f8f3db39f10299a07cd0ef7ef589eff45b50b173e29d596c7171e6a: Status 404 returned error can't find the container with id c65700f65f8f3db39f10299a07cd0ef7ef589eff45b50b173e29d596c7171e6a Nov 24 19:38:26 crc kubenswrapper[4812]: I1124 19:38:26.253588 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerStarted","Data":"c65700f65f8f3db39f10299a07cd0ef7ef589eff45b50b173e29d596c7171e6a"} Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.291593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerStarted","Data":"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c"} Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.292397 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerStarted","Data":"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba"} Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.295733 4812 generic.go:334] "Generic (PLEG): container finished" podID="0feedb89-3882-482b-82d0-24de3d81f892" containerID="7722d3ce2d1af69114e8e9df886108d0349e108a42086aac479457603d1c072a" exitCode=137 Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.295806 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0feedb89-3882-482b-82d0-24de3d81f892","Type":"ContainerDied","Data":"7722d3ce2d1af69114e8e9df886108d0349e108a42086aac479457603d1c072a"} Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.295839 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0feedb89-3882-482b-82d0-24de3d81f892","Type":"ContainerDied","Data":"b3f8f78ad659646a1ac1d514ff1e2321791c7fe8f94b08f7a24a4f5862a350a0"} Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.295849 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3f8f78ad659646a1ac1d514ff1e2321791c7fe8f94b08f7a24a4f5862a350a0" Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.303549 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.502305 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6rlp\" (UniqueName: \"kubernetes.io/projected/0feedb89-3882-482b-82d0-24de3d81f892-kube-api-access-b6rlp\") pod \"0feedb89-3882-482b-82d0-24de3d81f892\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.502911 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-config-data\") pod \"0feedb89-3882-482b-82d0-24de3d81f892\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.503062 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-combined-ca-bundle\") pod \"0feedb89-3882-482b-82d0-24de3d81f892\" (UID: \"0feedb89-3882-482b-82d0-24de3d81f892\") " Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.508250 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0feedb89-3882-482b-82d0-24de3d81f892-kube-api-access-b6rlp" (OuterVolumeSpecName: "kube-api-access-b6rlp") pod "0feedb89-3882-482b-82d0-24de3d81f892" (UID: "0feedb89-3882-482b-82d0-24de3d81f892"). InnerVolumeSpecName "kube-api-access-b6rlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.534709 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0feedb89-3882-482b-82d0-24de3d81f892" (UID: "0feedb89-3882-482b-82d0-24de3d81f892"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.562571 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-config-data" (OuterVolumeSpecName: "config-data") pod "0feedb89-3882-482b-82d0-24de3d81f892" (UID: "0feedb89-3882-482b-82d0-24de3d81f892"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.605547 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6rlp\" (UniqueName: \"kubernetes.io/projected/0feedb89-3882-482b-82d0-24de3d81f892-kube-api-access-b6rlp\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.605586 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:27 crc kubenswrapper[4812]: I1124 19:38:27.605599 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0feedb89-3882-482b-82d0-24de3d81f892-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.306640 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerStarted","Data":"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7"} Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.306654 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.342760 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.352748 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.363047 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:38:28 crc kubenswrapper[4812]: E1124 19:38:28.363601 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0feedb89-3882-482b-82d0-24de3d81f892" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.363627 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0feedb89-3882-482b-82d0-24de3d81f892" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.363915 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0feedb89-3882-482b-82d0-24de3d81f892" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.364778 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.366663 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.367264 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.367657 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.372048 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.421928 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.421967 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.422067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.422115 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trjwl\" (UniqueName: \"kubernetes.io/projected/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-kube-api-access-trjwl\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.422143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.523839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.523896 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.523937 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.523967 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trjwl\" (UniqueName: \"kubernetes.io/projected/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-kube-api-access-trjwl\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.523997 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.531021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.534106 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.534251 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.534308 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.564987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trjwl\" (UniqueName: \"kubernetes.io/projected/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-kube-api-access-trjwl\") pod \"nova-cell1-novncproxy-0\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.681005 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:28 crc kubenswrapper[4812]: I1124 19:38:28.977264 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0feedb89-3882-482b-82d0-24de3d81f892" path="/var/lib/kubelet/pods/0feedb89-3882-482b-82d0-24de3d81f892/volumes" Nov 24 19:38:29 crc kubenswrapper[4812]: I1124 19:38:29.157737 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:38:29 crc kubenswrapper[4812]: W1124 19:38:29.171526 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee91c9e3_c2f7_48a3_8a78_19f6dc262e26.slice/crio-c7714efb1d8ef338030911e95ba5501768f5bdd583b8c2b72ac5c89c3f066ed0 WatchSource:0}: Error finding container c7714efb1d8ef338030911e95ba5501768f5bdd583b8c2b72ac5c89c3f066ed0: Status 404 returned error can't find the container with id c7714efb1d8ef338030911e95ba5501768f5bdd583b8c2b72ac5c89c3f066ed0 Nov 24 19:38:29 crc kubenswrapper[4812]: I1124 19:38:29.318235 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26","Type":"ContainerStarted","Data":"c7714efb1d8ef338030911e95ba5501768f5bdd583b8c2b72ac5c89c3f066ed0"} Nov 24 19:38:29 crc kubenswrapper[4812]: I1124 19:38:29.797399 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 19:38:29 crc kubenswrapper[4812]: I1124 19:38:29.798015 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 19:38:29 crc kubenswrapper[4812]: I1124 19:38:29.800259 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 19:38:29 crc kubenswrapper[4812]: I1124 19:38:29.808663 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.331193 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26","Type":"ContainerStarted","Data":"632ed76758afffe7deabbb497be96385b1fa68fdc85d0a39f726b5ca3baf0880"} Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.333921 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerStarted","Data":"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533"} Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.334301 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.338033 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.357262 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.357241802 podStartE2EDuration="2.357241802s" podCreationTimestamp="2025-11-24 19:38:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:30.348955771 +0000 UTC m=+1304.137908162" watchObservedRunningTime="2025-11-24 19:38:30.357241802 +0000 UTC m=+1304.146194183" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.534656 4812 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.487053563 podStartE2EDuration="6.534636338s" podCreationTimestamp="2025-11-24 19:38:24 +0000 UTC" firstStartedPulling="2025-11-24 19:38:25.413161912 +0000 UTC m=+1299.202114283" lastFinishedPulling="2025-11-24 19:38:29.460744687 +0000 UTC m=+1303.249697058" observedRunningTime="2025-11-24 19:38:30.42123794 +0000 UTC m=+1304.210190311" watchObservedRunningTime="2025-11-24 19:38:30.534636338 +0000 UTC m=+1304.323588709" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.535735 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-xnfjt"] Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.543027 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.560614 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-xnfjt"] Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.574019 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.574106 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg22s\" (UniqueName: \"kubernetes.io/projected/d7b9be63-7c78-4d97-87e7-8efd09c2669b-kube-api-access-lg22s\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.574135 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.574154 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.574209 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.574241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-config\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.674836 
4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-config\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.675100 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.675160 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg22s\" (UniqueName: \"kubernetes.io/projected/d7b9be63-7c78-4d97-87e7-8efd09c2669b-kube-api-access-lg22s\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.675187 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.675206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.675257 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.676064 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-svc\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.676579 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-config\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.677068 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.677788 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.678296 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.707009 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg22s\" (UniqueName: \"kubernetes.io/projected/d7b9be63-7c78-4d97-87e7-8efd09c2669b-kube-api-access-lg22s\") pod \"dnsmasq-dns-5d7f54fb65-xnfjt\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.885947 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 19:38:30 crc kubenswrapper[4812]: I1124 19:38:30.896139 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:31 crc kubenswrapper[4812]: I1124 19:38:31.342947 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 19:38:31 crc kubenswrapper[4812]: I1124 19:38:31.442119 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-xnfjt"] Nov 24 19:38:32 crc kubenswrapper[4812]: I1124 19:38:32.365518 4812 generic.go:334] "Generic (PLEG): container finished" podID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" containerID="b78e0685c8bdcfef9fa9fdc10eabd6b7f30b9fa81246e5fa6583ae72d1a875d6" exitCode=0 Nov 24 19:38:32 crc kubenswrapper[4812]: I1124 19:38:32.366968 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" event={"ID":"d7b9be63-7c78-4d97-87e7-8efd09c2669b","Type":"ContainerDied","Data":"b78e0685c8bdcfef9fa9fdc10eabd6b7f30b9fa81246e5fa6583ae72d1a875d6"} Nov 24 19:38:32 crc kubenswrapper[4812]: I1124 19:38:32.367004 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" event={"ID":"d7b9be63-7c78-4d97-87e7-8efd09c2669b","Type":"ContainerStarted","Data":"678ed26199d83c2f2720c5a67c94fb04859f4e7a7afdd5e3d0f856d59fc34f92"} Nov 24 19:38:33 crc kubenswrapper[4812]: I1124 19:38:33.375638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" event={"ID":"d7b9be63-7c78-4d97-87e7-8efd09c2669b","Type":"ContainerStarted","Data":"1f40489248bad75264493cd7ecc63fec04273b7705afa1d923049f768b10f769"} Nov 24 19:38:33 crc kubenswrapper[4812]: I1124 19:38:33.377156 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:33 crc kubenswrapper[4812]: I1124 19:38:33.410228 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" podStartSLOduration=3.410211443 podStartE2EDuration="3.410211443s" podCreationTimestamp="2025-11-24 19:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:33.405894292 +0000 UTC m=+1307.194846653" 
watchObservedRunningTime="2025-11-24 19:38:33.410211443 +0000 UTC m=+1307.199163814" Nov 24 19:38:33 crc kubenswrapper[4812]: I1124 19:38:33.513445 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:33 crc kubenswrapper[4812]: I1124 19:38:33.513681 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-log" containerID="cri-o://791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d" gracePeriod=30 Nov 24 19:38:33 crc kubenswrapper[4812]: I1124 19:38:33.513800 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-api" containerID="cri-o://058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d" gracePeriod=30 Nov 24 19:38:33 crc kubenswrapper[4812]: I1124 19:38:33.681127 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:34 crc kubenswrapper[4812]: I1124 19:38:34.386720 4812 generic.go:334] "Generic (PLEG): container finished" podID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerID="791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d" exitCode=143 Nov 24 19:38:34 crc kubenswrapper[4812]: I1124 19:38:34.386898 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b9ef26-80dd-4a42-8ec5-d28479574740","Type":"ContainerDied","Data":"791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d"} Nov 24 19:38:34 crc kubenswrapper[4812]: I1124 19:38:34.501205 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:34 crc kubenswrapper[4812]: I1124 19:38:34.501449 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-central-agent" containerID="cri-o://41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" gracePeriod=30 Nov 24 19:38:34 crc kubenswrapper[4812]: I1124 19:38:34.501650 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="proxy-httpd" containerID="cri-o://0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" gracePeriod=30 Nov 24 19:38:34 crc kubenswrapper[4812]: I1124 19:38:34.501689 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="sg-core" containerID="cri-o://2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" gracePeriod=30 Nov 24 19:38:34 crc kubenswrapper[4812]: I1124 19:38:34.501700 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-notification-agent" containerID="cri-o://995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" gracePeriod=30 Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.253190 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.392536 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8q4r\" (UniqueName: \"kubernetes.io/projected/584ff91f-b107-4cb7-a33d-f02a782466f8-kube-api-access-m8q4r\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.392721 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-config-data\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.392777 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-combined-ca-bundle\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.392889 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-scripts\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.392957 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-sg-core-conf-yaml\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.393010 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-run-httpd\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.393059 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-log-httpd\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.393112 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-ceilometer-tls-certs\") pod \"584ff91f-b107-4cb7-a33d-f02a782466f8\" (UID: \"584ff91f-b107-4cb7-a33d-f02a782466f8\") " Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.393450 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.393675 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.394225 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.394248 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584ff91f-b107-4cb7-a33d-f02a782466f8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.398419 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584ff91f-b107-4cb7-a33d-f02a782466f8-kube-api-access-m8q4r" (OuterVolumeSpecName: "kube-api-access-m8q4r") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "kube-api-access-m8q4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.400156 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-scripts" (OuterVolumeSpecName: "scripts") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.416767 4812 generic.go:334] "Generic (PLEG): container finished" podID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerID="0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" exitCode=0 Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.416812 4812 generic.go:334] "Generic (PLEG): container finished" podID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerID="2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" exitCode=2 Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.416831 4812 generic.go:334] "Generic (PLEG): container finished" podID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerID="995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" exitCode=0 Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.416843 4812 generic.go:334] "Generic (PLEG): container finished" podID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerID="41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" exitCode=0 Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.417012 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.417455 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerDied","Data":"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533"} Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.417535 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerDied","Data":"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7"} Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.417553 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerDied","Data":"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c"} Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.417564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerDied","Data":"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba"} Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.417575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"584ff91f-b107-4cb7-a33d-f02a782466f8","Type":"ContainerDied","Data":"c65700f65f8f3db39f10299a07cd0ef7ef589eff45b50b173e29d596c7171e6a"} Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.417597 4812 scope.go:117] "RemoveContainer" containerID="0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.441921 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.449561 4812 scope.go:117] "RemoveContainer" containerID="2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.455944 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.476407 4812 scope.go:117] "RemoveContainer" containerID="995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.496470 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.497262 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.497524 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.497613 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8q4r\" (UniqueName: \"kubernetes.io/projected/584ff91f-b107-4cb7-a33d-f02a782466f8-kube-api-access-m8q4r\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.497984 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.498266 4812 scope.go:117] "RemoveContainer" containerID="41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.506227 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-config-data" (OuterVolumeSpecName: "config-data") pod "584ff91f-b107-4cb7-a33d-f02a782466f8" (UID: "584ff91f-b107-4cb7-a33d-f02a782466f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.520631 4812 scope.go:117] "RemoveContainer" containerID="0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.521057 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": container with ID starting with 0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533 not found: ID does not exist" containerID="0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.521166 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533"} err="failed to get container status \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": rpc error: code = NotFound desc = could not find container \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": container with ID starting with 0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.521254 4812 scope.go:117] "RemoveContainer" containerID="2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.521715 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": container with ID starting with 2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7 not found: ID does not exist" containerID="2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.521805 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7"} err="failed to get container status \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": rpc error: code = NotFound desc = could not find container \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": container with ID starting with 2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.521880 4812 scope.go:117] "RemoveContainer" containerID="995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.522493 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": container with ID starting with 995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c not found: ID does not exist" containerID="995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.522533 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c"} err="failed to get container status \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": rpc error: code = NotFound desc = could not 
find container \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": container with ID starting with 995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.522559 4812 scope.go:117] "RemoveContainer" containerID="41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.522859 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": container with ID starting with 41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba not found: ID does not exist" containerID="41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.523013 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba"} err="failed to get container status \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": rpc error: code = NotFound desc = could not find container \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": container with ID starting with 41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.523078 4812 scope.go:117] "RemoveContainer" containerID="0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.523649 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533"} err="failed to get container status \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": rpc error: code = NotFound desc = could not find container \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": container with ID starting with 0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.523734 4812 scope.go:117] "RemoveContainer" containerID="2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.524154 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7"} err="failed to get container status \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": rpc error: code = NotFound desc = could not find container \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": container with ID starting with 2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.524183 4812 scope.go:117] "RemoveContainer" containerID="995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.525548 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c"} err="failed to get container status \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": rpc error: code = NotFound desc = could not 
find container \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": container with ID starting with 995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.525587 4812 scope.go:117] "RemoveContainer" containerID="41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.525932 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba"} err="failed to get container status \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": rpc error: code = NotFound desc = could not find container \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": container with ID starting with 41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.526023 4812 scope.go:117] "RemoveContainer" containerID="0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.526528 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533"} err="failed to get container status \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": rpc error: code = NotFound desc = could not find container \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": container with ID starting with 0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.526623 4812 scope.go:117] "RemoveContainer" containerID="2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.527118 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7"} err="failed to get container status \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": rpc error: code = NotFound desc = could not find container \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": container with ID starting with 2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.527145 4812 scope.go:117] "RemoveContainer" containerID="995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.527549 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c"} err="failed to get container status \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": rpc error: code = NotFound desc = could not find container \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": container with ID starting with 995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.527575 4812 scope.go:117] "RemoveContainer" containerID="41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.527822 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba"} err="failed to get container status \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": rpc error: code = NotFound desc = could not find container \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": container with ID starting with 41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.527847 4812 scope.go:117] "RemoveContainer" containerID="0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.528247 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533"} err="failed to get container status \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": rpc error: code = NotFound desc = could not find container \"0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533\": container with ID starting with 0bd8a7fc8a3e67d5ab0f62cee29b935d00f5d9028804509980c1e741d7a7d533 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.528272 4812 scope.go:117] "RemoveContainer" containerID="2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.528585 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7"} err="failed to get container status \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": rpc error: code = NotFound desc = could not find container \"2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7\": container with ID starting with 2b25cb01e92344f75ac20cfded0645fd05a0130fac8bc1bb4b442179e1c220a7 not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.528610 4812 scope.go:117] "RemoveContainer" containerID="995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.528817 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c"} err="failed to get container status \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": rpc error: code = NotFound desc = could not find container \"995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c\": container with ID starting with 995181af799079040c04f71c3fc08666e4d0ea59d5606157185eff41acb09f3c not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.528843 4812 scope.go:117] "RemoveContainer" containerID="41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.529078 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba"} err="failed to get container status \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": rpc error: code = NotFound desc = could not find container \"41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba\": container with ID starting with 
41bc02a1fdfa50fd7f3466840401103cb5eb050157210614942ad13e5ecf53ba not found: ID does not exist" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.600419 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.600457 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584ff91f-b107-4cb7-a33d-f02a782466f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.809508 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.818527 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.834541 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.835006 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="proxy-httpd" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835023 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="proxy-httpd" Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.835039 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="sg-core" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835045 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="sg-core" Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.835070 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-notification-agent" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835077 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-notification-agent" Nov 24 19:38:35 crc kubenswrapper[4812]: E1124 19:38:35.835086 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-central-agent" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835112 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-central-agent" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835333 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-notification-agent" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835370 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="proxy-httpd" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835382 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="ceilometer-central-agent" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.835390 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" containerName="sg-core" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 
19:38:35.847382 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.848972 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.884969 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.885327 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.885551 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912040 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-scripts\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912158 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48k4h\" (UniqueName: \"kubernetes.io/projected/09c615e5-0d09-4775-b559-6883b6dc280b-kube-api-access-48k4h\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912187 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912242 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-log-httpd\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912291 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-run-httpd\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912382 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:35 crc kubenswrapper[4812]: I1124 19:38:35.912429 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-config-data\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.013655 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-scripts\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.013934 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48k4h\" (UniqueName: \"kubernetes.io/projected/09c615e5-0d09-4775-b559-6883b6dc280b-kube-api-access-48k4h\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.013955 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.013975 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-log-httpd\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.014000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-run-httpd\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.014024 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.014046 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.014073 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-config-data\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.014478 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-run-httpd\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.014594 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-log-httpd\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.018601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.018609 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.019491 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-config-data\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.020727 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-scripts\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.020768 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.034791 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48k4h\" (UniqueName: \"kubernetes.io/projected/09c615e5-0d09-4775-b559-6883b6dc280b-kube-api-access-48k4h\") pod \"ceilometer-0\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.221371 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.739076 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:38:36 crc kubenswrapper[4812]: I1124 19:38:36.977229 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584ff91f-b107-4cb7-a33d-f02a782466f8" path="/var/lib/kubelet/pods/584ff91f-b107-4cb7-a33d-f02a782466f8/volumes" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.223721 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.247538 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b9ef26-80dd-4a42-8ec5-d28479574740-logs\") pod \"76b9ef26-80dd-4a42-8ec5-d28479574740\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.247721 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdfd6\" (UniqueName: \"kubernetes.io/projected/76b9ef26-80dd-4a42-8ec5-d28479574740-kube-api-access-bdfd6\") pod \"76b9ef26-80dd-4a42-8ec5-d28479574740\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.247751 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-config-data\") pod \"76b9ef26-80dd-4a42-8ec5-d28479574740\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.247783 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-combined-ca-bundle\") pod \"76b9ef26-80dd-4a42-8ec5-d28479574740\" (UID: \"76b9ef26-80dd-4a42-8ec5-d28479574740\") " Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.254456 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76b9ef26-80dd-4a42-8ec5-d28479574740-logs" (OuterVolumeSpecName: "logs") pod "76b9ef26-80dd-4a42-8ec5-d28479574740" (UID: "76b9ef26-80dd-4a42-8ec5-d28479574740"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.255426 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b9ef26-80dd-4a42-8ec5-d28479574740-kube-api-access-bdfd6" (OuterVolumeSpecName: "kube-api-access-bdfd6") pod "76b9ef26-80dd-4a42-8ec5-d28479574740" (UID: "76b9ef26-80dd-4a42-8ec5-d28479574740"). InnerVolumeSpecName "kube-api-access-bdfd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.302543 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-config-data" (OuterVolumeSpecName: "config-data") pod "76b9ef26-80dd-4a42-8ec5-d28479574740" (UID: "76b9ef26-80dd-4a42-8ec5-d28479574740"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.305793 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76b9ef26-80dd-4a42-8ec5-d28479574740" (UID: "76b9ef26-80dd-4a42-8ec5-d28479574740"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.350904 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76b9ef26-80dd-4a42-8ec5-d28479574740-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.350940 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdfd6\" (UniqueName: \"kubernetes.io/projected/76b9ef26-80dd-4a42-8ec5-d28479574740-kube-api-access-bdfd6\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.350955 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.350965 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b9ef26-80dd-4a42-8ec5-d28479574740-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.442090 4812 generic.go:334] "Generic (PLEG): container finished" podID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerID="058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d" exitCode=0 Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.442129 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b9ef26-80dd-4a42-8ec5-d28479574740","Type":"ContainerDied","Data":"058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d"} Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.442180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"76b9ef26-80dd-4a42-8ec5-d28479574740","Type":"ContainerDied","Data":"6b6c04210c03c4c604560c70dff0721ce36442bd368733d98343554eea3ac691"} Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.442195 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.442207 4812 scope.go:117] "RemoveContainer" containerID="058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.444297 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerStarted","Data":"4f851252ee23d67d22e90d2078e9411ca959bda541556e2caded4cc701cdeee5"} Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.466726 4812 scope.go:117] "RemoveContainer" containerID="791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.490752 4812 scope.go:117] "RemoveContainer" containerID="058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d" Nov 24 19:38:37 crc kubenswrapper[4812]: E1124 19:38:37.491591 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d\": container with ID starting with 058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d not found: ID does not exist" containerID="058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.491643 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d"} err="failed to get container status \"058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d\": rpc error: code = NotFound desc = could not find container \"058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d\": container with ID starting with 058e29dcc10816b86281e89b4c08e6894ae7b649b59f228d6984d05cd22e3f4d not found: ID does not exist" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.491679 4812 scope.go:117] "RemoveContainer" containerID="791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d" Nov 24 19:38:37 crc kubenswrapper[4812]: E1124 19:38:37.491949 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d\": container with ID starting with 791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d not found: ID does not exist" containerID="791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.491978 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d"} err="failed to get container status \"791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d\": rpc error: code = NotFound desc = could not find container \"791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d\": container with ID starting with 791d8ba9a1bc07f4c77ab5b1e9e2cb270739e3425d3cb889813fb41ebd2f667d not found: ID does not exist" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.497687 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.507104 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.527691 4812 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:37 crc kubenswrapper[4812]: E1124 19:38:37.528273 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-api" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.528361 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-api" Nov 24 19:38:37 crc kubenswrapper[4812]: E1124 19:38:37.528433 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-log" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.528491 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-log" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.528723 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-api" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.528801 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" containerName="nova-api-log" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.529767 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.533613 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.533835 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.533996 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.540091 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.556651 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.556871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.556966 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/404974fb-32c8-45a5-a403-da498a0979c3-logs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.557066 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-config-data\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc 
kubenswrapper[4812]: I1124 19:38:37.557153 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-public-tls-certs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.557232 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zln8h\" (UniqueName: \"kubernetes.io/projected/404974fb-32c8-45a5-a403-da498a0979c3-kube-api-access-zln8h\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.660074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.660481 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/404974fb-32c8-45a5-a403-da498a0979c3-logs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.660664 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-config-data\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.660790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-public-tls-certs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.660915 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zln8h\" (UniqueName: \"kubernetes.io/projected/404974fb-32c8-45a5-a403-da498a0979c3-kube-api-access-zln8h\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.660956 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/404974fb-32c8-45a5-a403-da498a0979c3-logs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.661205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.667030 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc 
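
Each volume declared by the new nova-api-0 pod moves through the reconciler sequence shown above: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. Pairing the started/succeeded messages per volume gives a rough mount latency. A sketch of that pairing, assuming one journal entry per line on stdin (the wrapping in this capture splits entries across lines, so a real run would need them rejoined first):

    # Sketch: pair "MountVolume started" with "MountVolume.SetUp succeeded"
    # per volume name and print the elapsed time between them.
    import re
    import sys
    from datetime import datetime

    TS  = re.compile(r'[IWE]\d{4} (\d{2}:\d{2}:\d{2}\.\d+)')
    VOL = re.compile(r'volume \\?"([^"\\]+)\\?"')  # tolerates klog's escaped quotes

    started = {}
    for entry in sys.stdin:
        ts_m, vol_m = TS.search(entry), VOL.search(entry)
        if not (ts_m and vol_m):
            continue
        ts = datetime.strptime(ts_m.group(1), "%H:%M:%S.%f")
        name = vol_m.group(1)
        if "operationExecutor.MountVolume started" in entry:
            started[name] = ts
        elif "MountVolume.SetUp succeeded" in entry and name in started:
            print(f"{name}: {(ts - started.pop(name)).total_seconds():.3f}s")

Fed from something like journalctl -u kubelet (unit name assumed), this would report sub-second setup times for the secret, projected, and empty-dir volumes seen here.
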
kubenswrapper[4812]: I1124 19:38:37.667233 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-public-tls-certs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.667414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.667536 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-config-data\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.679724 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zln8h\" (UniqueName: \"kubernetes.io/projected/404974fb-32c8-45a5-a403-da498a0979c3-kube-api-access-zln8h\") pod \"nova-api-0\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " pod="openstack/nova-api-0" Nov 24 19:38:37 crc kubenswrapper[4812]: I1124 19:38:37.856282 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:38 crc kubenswrapper[4812]: I1124 19:38:38.395206 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:38 crc kubenswrapper[4812]: I1124 19:38:38.470831 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"404974fb-32c8-45a5-a403-da498a0979c3","Type":"ContainerStarted","Data":"4e5f03c9d53d9bb9b2dc598f542da87b172a98eacf27c01251a5069647aad8d2"} Nov 24 19:38:38 crc kubenswrapper[4812]: I1124 19:38:38.473076 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerStarted","Data":"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267"} Nov 24 19:38:38 crc kubenswrapper[4812]: I1124 19:38:38.473125 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerStarted","Data":"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0"} Nov 24 19:38:38 crc kubenswrapper[4812]: I1124 19:38:38.682174 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:38 crc kubenswrapper[4812]: I1124 19:38:38.715823 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:38 crc kubenswrapper[4812]: I1124 19:38:38.991384 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b9ef26-80dd-4a42-8ec5-d28479574740" path="/var/lib/kubelet/pods/76b9ef26-80dd-4a42-8ec5-d28479574740/volumes" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.483558 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"404974fb-32c8-45a5-a403-da498a0979c3","Type":"ContainerStarted","Data":"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1"} Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.483846 4812 
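
The ContainerStarted events above are PLEG (pod lifecycle event generator) relist results feeding the sync loop. For nova-api-0, the later "Killing container" entries in this log identify 92929414... as nova-api-api and 2e2720bb... as nova-api-log, which leaves the first ID reported (4e5f03c9...) as the pod sandbox. A sketch that collects per-pod event history so such IDs can be cross-referenced (one entry per line assumed; the regex is inferred from the entries above):

    # Sketch: gather PLEG events per pod so ContainerStarted IDs can be
    # matched against the corresponding ContainerDied IDs later in the log.
    import collections
    import re
    import sys

    EVT = re.compile(r'event for pod" pod="([^"]+)" '
                     r'event={"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"}')

    history = collections.defaultdict(list)
    for entry in sys.stdin:
        m = EVT.search(entry)
        if m:
            pod, _uid, etype, data = m.groups()
            history[pod].append((etype, data[:12]))  # short ID is enough to eyeball

    for pod, events in sorted(history.items()):
        print(pod, events)
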
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"404974fb-32c8-45a5-a403-da498a0979c3","Type":"ContainerStarted","Data":"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea"} Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.487016 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerStarted","Data":"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63"} Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.507137 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.545731 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5457089870000003 podStartE2EDuration="2.545708987s" podCreationTimestamp="2025-11-24 19:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:39.51969131 +0000 UTC m=+1313.308643691" watchObservedRunningTime="2025-11-24 19:38:39.545708987 +0000 UTC m=+1313.334661368" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.671006 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hrl87"] Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.672147 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.674306 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.689855 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.691125 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hrl87"] Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.719230 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-scripts\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.719283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-config-data\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.719496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.719693 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmzlz\" (UniqueName: 
\"kubernetes.io/projected/c79fffda-97d4-4772-bea3-dd5bf8721315-kube-api-access-hmzlz\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.821972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.822062 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmzlz\" (UniqueName: \"kubernetes.io/projected/c79fffda-97d4-4772-bea3-dd5bf8721315-kube-api-access-hmzlz\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.822096 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-scripts\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.822126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-config-data\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.827041 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-config-data\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.829483 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-scripts\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.839628 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmzlz\" (UniqueName: \"kubernetes.io/projected/c79fffda-97d4-4772-bea3-dd5bf8721315-kube-api-access-hmzlz\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.840176 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hrl87\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:39 crc kubenswrapper[4812]: I1124 19:38:39.994478 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:40 crc kubenswrapper[4812]: I1124 19:38:40.500881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerStarted","Data":"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2"} Nov 24 19:38:40 crc kubenswrapper[4812]: I1124 19:38:40.525827 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.156189382 podStartE2EDuration="5.525811068s" podCreationTimestamp="2025-11-24 19:38:35 +0000 UTC" firstStartedPulling="2025-11-24 19:38:36.749625164 +0000 UTC m=+1310.538577535" lastFinishedPulling="2025-11-24 19:38:40.11924686 +0000 UTC m=+1313.908199221" observedRunningTime="2025-11-24 19:38:40.521903599 +0000 UTC m=+1314.310855990" watchObservedRunningTime="2025-11-24 19:38:40.525811068 +0000 UTC m=+1314.314763439" Nov 24 19:38:40 crc kubenswrapper[4812]: I1124 19:38:40.590767 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hrl87"] Nov 24 19:38:40 crc kubenswrapper[4812]: I1124 19:38:40.897543 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:38:40 crc kubenswrapper[4812]: I1124 19:38:40.961229 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-s9tzq"] Nov 24 19:38:40 crc kubenswrapper[4812]: I1124 19:38:40.961508 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" podUID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerName="dnsmasq-dns" containerID="cri-o://f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534" gracePeriod=10 Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.449527 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.515650 4812 generic.go:334] "Generic (PLEG): container finished" podID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerID="f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534" exitCode=0 Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.515706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" event={"ID":"90a24987-2482-402a-bd74-c4edd9e8c7a5","Type":"ContainerDied","Data":"f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534"} Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.515731 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" event={"ID":"90a24987-2482-402a-bd74-c4edd9e8c7a5","Type":"ContainerDied","Data":"258b547381b9743c23e116f2e76ed22188a22bfbf426a46945e957510dc3bcc9"} Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.515747 4812 scope.go:117] "RemoveContainer" containerID="f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.515872 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5dd7c4987f-s9tzq" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.526250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hrl87" event={"ID":"c79fffda-97d4-4772-bea3-dd5bf8721315","Type":"ContainerStarted","Data":"4e7a4b06f4ec1fd3d802dead9d218ddf2050b690808c104ce9eff41582fadc93"} Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.526405 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.526425 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hrl87" event={"ID":"c79fffda-97d4-4772-bea3-dd5bf8721315","Type":"ContainerStarted","Data":"89fda10e3c98408c415aae6ed2734fce87577cc1148699562bb92d96e1ebf600"} Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.548630 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hrl87" podStartSLOduration=2.548609201 podStartE2EDuration="2.548609201s" podCreationTimestamp="2025-11-24 19:38:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:41.541728568 +0000 UTC m=+1315.330680939" watchObservedRunningTime="2025-11-24 19:38:41.548609201 +0000 UTC m=+1315.337561572" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.553211 4812 scope.go:117] "RemoveContainer" containerID="d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.566032 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-config\") pod \"90a24987-2482-402a-bd74-c4edd9e8c7a5\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.566073 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-svc\") pod \"90a24987-2482-402a-bd74-c4edd9e8c7a5\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.566174 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-swift-storage-0\") pod \"90a24987-2482-402a-bd74-c4edd9e8c7a5\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.566269 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-sb\") pod \"90a24987-2482-402a-bd74-c4edd9e8c7a5\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.566329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-nb\") pod \"90a24987-2482-402a-bd74-c4edd9e8c7a5\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.566402 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snrck\" (UniqueName: 
\"kubernetes.io/projected/90a24987-2482-402a-bd74-c4edd9e8c7a5-kube-api-access-snrck\") pod \"90a24987-2482-402a-bd74-c4edd9e8c7a5\" (UID: \"90a24987-2482-402a-bd74-c4edd9e8c7a5\") " Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.573715 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90a24987-2482-402a-bd74-c4edd9e8c7a5-kube-api-access-snrck" (OuterVolumeSpecName: "kube-api-access-snrck") pod "90a24987-2482-402a-bd74-c4edd9e8c7a5" (UID: "90a24987-2482-402a-bd74-c4edd9e8c7a5"). InnerVolumeSpecName "kube-api-access-snrck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.582762 4812 scope.go:117] "RemoveContainer" containerID="f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534" Nov 24 19:38:41 crc kubenswrapper[4812]: E1124 19:38:41.583145 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534\": container with ID starting with f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534 not found: ID does not exist" containerID="f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.583174 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534"} err="failed to get container status \"f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534\": rpc error: code = NotFound desc = could not find container \"f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534\": container with ID starting with f70d149ddfd46c785755b9349dba5981dd2385aef715ad1622f4b420f03e5534 not found: ID does not exist" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.583194 4812 scope.go:117] "RemoveContainer" containerID="d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26" Nov 24 19:38:41 crc kubenswrapper[4812]: E1124 19:38:41.583666 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26\": container with ID starting with d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26 not found: ID does not exist" containerID="d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.583730 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26"} err="failed to get container status \"d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26\": rpc error: code = NotFound desc = could not find container \"d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26\": container with ID starting with d1ccc8c5e188f97cc9beaea3ba95e24061b68b7713092914ac0cbe2887f9ad26 not found: ID does not exist" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.630543 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "90a24987-2482-402a-bd74-c4edd9e8c7a5" (UID: "90a24987-2482-402a-bd74-c4edd9e8c7a5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.634864 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "90a24987-2482-402a-bd74-c4edd9e8c7a5" (UID: "90a24987-2482-402a-bd74-c4edd9e8c7a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.644909 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90a24987-2482-402a-bd74-c4edd9e8c7a5" (UID: "90a24987-2482-402a-bd74-c4edd9e8c7a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.649703 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "90a24987-2482-402a-bd74-c4edd9e8c7a5" (UID: "90a24987-2482-402a-bd74-c4edd9e8c7a5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.668896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-config" (OuterVolumeSpecName: "config") pod "90a24987-2482-402a-bd74-c4edd9e8c7a5" (UID: "90a24987-2482-402a-bd74-c4edd9e8c7a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.670324 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.670365 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snrck\" (UniqueName: \"kubernetes.io/projected/90a24987-2482-402a-bd74-c4edd9e8c7a5-kube-api-access-snrck\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.670376 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.670384 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.670392 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.670400 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90a24987-2482-402a-bd74-c4edd9e8c7a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.913843 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5dd7c4987f-s9tzq"] Nov 24 19:38:41 crc kubenswrapper[4812]: I1124 19:38:41.923617 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5dd7c4987f-s9tzq"] Nov 24 19:38:42 crc kubenswrapper[4812]: I1124 19:38:42.983536 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90a24987-2482-402a-bd74-c4edd9e8c7a5" path="/var/lib/kubelet/pods/90a24987-2482-402a-bd74-c4edd9e8c7a5/volumes" Nov 24 19:38:45 crc kubenswrapper[4812]: I1124 19:38:45.582744 4812 generic.go:334] "Generic (PLEG): container finished" podID="c79fffda-97d4-4772-bea3-dd5bf8721315" containerID="4e7a4b06f4ec1fd3d802dead9d218ddf2050b690808c104ce9eff41582fadc93" exitCode=0 Nov 24 19:38:45 crc kubenswrapper[4812]: I1124 19:38:45.582864 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hrl87" event={"ID":"c79fffda-97d4-4772-bea3-dd5bf8721315","Type":"ContainerDied","Data":"4e7a4b06f4ec1fd3d802dead9d218ddf2050b690808c104ce9eff41582fadc93"} Nov 24 19:38:46 crc kubenswrapper[4812]: I1124 19:38:46.983054 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.081650 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmzlz\" (UniqueName: \"kubernetes.io/projected/c79fffda-97d4-4772-bea3-dd5bf8721315-kube-api-access-hmzlz\") pod \"c79fffda-97d4-4772-bea3-dd5bf8721315\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.081732 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-config-data\") pod \"c79fffda-97d4-4772-bea3-dd5bf8721315\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.081826 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-combined-ca-bundle\") pod \"c79fffda-97d4-4772-bea3-dd5bf8721315\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.081909 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-scripts\") pod \"c79fffda-97d4-4772-bea3-dd5bf8721315\" (UID: \"c79fffda-97d4-4772-bea3-dd5bf8721315\") " Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.086928 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-scripts" (OuterVolumeSpecName: "scripts") pod "c79fffda-97d4-4772-bea3-dd5bf8721315" (UID: "c79fffda-97d4-4772-bea3-dd5bf8721315"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.087808 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79fffda-97d4-4772-bea3-dd5bf8721315-kube-api-access-hmzlz" (OuterVolumeSpecName: "kube-api-access-hmzlz") pod "c79fffda-97d4-4772-bea3-dd5bf8721315" (UID: "c79fffda-97d4-4772-bea3-dd5bf8721315"). InnerVolumeSpecName "kube-api-access-hmzlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.137222 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c79fffda-97d4-4772-bea3-dd5bf8721315" (UID: "c79fffda-97d4-4772-bea3-dd5bf8721315"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.137598 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-config-data" (OuterVolumeSpecName: "config-data") pod "c79fffda-97d4-4772-bea3-dd5bf8721315" (UID: "c79fffda-97d4-4772-bea3-dd5bf8721315"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.185057 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.185115 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.185137 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c79fffda-97d4-4772-bea3-dd5bf8721315-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.185154 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmzlz\" (UniqueName: \"kubernetes.io/projected/c79fffda-97d4-4772-bea3-dd5bf8721315-kube-api-access-hmzlz\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.610575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hrl87" event={"ID":"c79fffda-97d4-4772-bea3-dd5bf8721315","Type":"ContainerDied","Data":"89fda10e3c98408c415aae6ed2734fce87577cc1148699562bb92d96e1ebf600"} Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.610635 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89fda10e3c98408c415aae6ed2734fce87577cc1148699562bb92d96e1ebf600" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.610752 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hrl87" Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.828663 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.829238 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-log" containerID="cri-o://2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea" gracePeriod=30 Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.829416 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-api" containerID="cri-o://92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1" gracePeriod=30 Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.878379 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.879010 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" containerName="nova-scheduler-scheduler" containerID="cri-o://70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" gracePeriod=30 Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.894971 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.895204 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-log" containerID="cri-o://ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b" gracePeriod=30 Nov 24 19:38:47 crc kubenswrapper[4812]: I1124 19:38:47.895293 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-metadata" containerID="cri-o://841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806" gracePeriod=30 Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.422621 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.424050 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.425618 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.425661 4812 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" containerName="nova-scheduler-scheduler" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.474579 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.520163 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zln8h\" (UniqueName: \"kubernetes.io/projected/404974fb-32c8-45a5-a403-da498a0979c3-kube-api-access-zln8h\") pod \"404974fb-32c8-45a5-a403-da498a0979c3\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.520389 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-combined-ca-bundle\") pod \"404974fb-32c8-45a5-a403-da498a0979c3\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.520581 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-internal-tls-certs\") pod \"404974fb-32c8-45a5-a403-da498a0979c3\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.520621 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-public-tls-certs\") pod \"404974fb-32c8-45a5-a403-da498a0979c3\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.520701 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-config-data\") pod \"404974fb-32c8-45a5-a403-da498a0979c3\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.520745 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/404974fb-32c8-45a5-a403-da498a0979c3-logs\") pod \"404974fb-32c8-45a5-a403-da498a0979c3\" (UID: \"404974fb-32c8-45a5-a403-da498a0979c3\") " Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.524198 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/404974fb-32c8-45a5-a403-da498a0979c3-logs" (OuterVolumeSpecName: "logs") pod "404974fb-32c8-45a5-a403-da498a0979c3" (UID: "404974fb-32c8-45a5-a403-da498a0979c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.533062 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/404974fb-32c8-45a5-a403-da498a0979c3-kube-api-access-zln8h" (OuterVolumeSpecName: "kube-api-access-zln8h") pod "404974fb-32c8-45a5-a403-da498a0979c3" (UID: "404974fb-32c8-45a5-a403-da498a0979c3"). InnerVolumeSpecName "kube-api-access-zln8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.566687 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "404974fb-32c8-45a5-a403-da498a0979c3" (UID: "404974fb-32c8-45a5-a403-da498a0979c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.572068 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-config-data" (OuterVolumeSpecName: "config-data") pod "404974fb-32c8-45a5-a403-da498a0979c3" (UID: "404974fb-32c8-45a5-a403-da498a0979c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.574020 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "404974fb-32c8-45a5-a403-da498a0979c3" (UID: "404974fb-32c8-45a5-a403-da498a0979c3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.586655 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "404974fb-32c8-45a5-a403-da498a0979c3" (UID: "404974fb-32c8-45a5-a403-da498a0979c3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.623763 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zln8h\" (UniqueName: \"kubernetes.io/projected/404974fb-32c8-45a5-a403-da498a0979c3-kube-api-access-zln8h\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.623818 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.623864 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.623879 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.623893 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/404974fb-32c8-45a5-a403-da498a0979c3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.623906 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/404974fb-32c8-45a5-a403-da498a0979c3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.633202 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="404974fb-32c8-45a5-a403-da498a0979c3" containerID="92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1" exitCode=0 Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.633249 4812 generic.go:334] "Generic (PLEG): container finished" podID="404974fb-32c8-45a5-a403-da498a0979c3" containerID="2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea" exitCode=143 Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.633255 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.633310 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"404974fb-32c8-45a5-a403-da498a0979c3","Type":"ContainerDied","Data":"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1"} Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.633385 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"404974fb-32c8-45a5-a403-da498a0979c3","Type":"ContainerDied","Data":"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea"} Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.633404 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"404974fb-32c8-45a5-a403-da498a0979c3","Type":"ContainerDied","Data":"4e5f03c9d53d9bb9b2dc598f542da87b172a98eacf27c01251a5069647aad8d2"} Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.633429 4812 scope.go:117] "RemoveContainer" containerID="92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.647031 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerID="ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b" exitCode=143 Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.647070 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f9b1104-0b19-40dd-b447-364a7b4dc79a","Type":"ContainerDied","Data":"ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b"} Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.673535 4812 scope.go:117] "RemoveContainer" containerID="2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.679317 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.694973 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.701684 4812 scope.go:117] "RemoveContainer" containerID="92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1" Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.702175 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1\": container with ID starting with 92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1 not found: ID does not exist" containerID="92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.702215 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1"} err="failed to get 
container status \"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1\": rpc error: code = NotFound desc = could not find container \"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1\": container with ID starting with 92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1 not found: ID does not exist" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.702242 4812 scope.go:117] "RemoveContainer" containerID="2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea" Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.702558 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea\": container with ID starting with 2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea not found: ID does not exist" containerID="2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.702609 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea"} err="failed to get container status \"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea\": rpc error: code = NotFound desc = could not find container \"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea\": container with ID starting with 2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea not found: ID does not exist" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.702638 4812 scope.go:117] "RemoveContainer" containerID="92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.703079 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1"} err="failed to get container status \"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1\": rpc error: code = NotFound desc = could not find container \"92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1\": container with ID starting with 92929414396b7aac438f05516894513aac8879121d8f65257d2d92ecc7bcd3b1 not found: ID does not exist" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.703125 4812 scope.go:117] "RemoveContainer" containerID="2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.705811 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea"} err="failed to get container status \"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea\": rpc error: code = NotFound desc = could not find container \"2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea\": container with ID starting with 2e2720bb760df0de58cc6d48c4b0b6f3f483a1c0ee1ef1ba694eda7b066767ea not found: ID does not exist" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.708428 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.709074 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerName="init" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.709154 4812 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerName="init" Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.709244 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-api" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.709309 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-api" Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.709432 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-log" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.709495 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-log" Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.709569 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerName="dnsmasq-dns" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.709641 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerName="dnsmasq-dns" Nov 24 19:38:48 crc kubenswrapper[4812]: E1124 19:38:48.709723 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c79fffda-97d4-4772-bea3-dd5bf8721315" containerName="nova-manage" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.709801 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c79fffda-97d4-4772-bea3-dd5bf8721315" containerName="nova-manage" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.710075 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-log" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.710184 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="404974fb-32c8-45a5-a403-da498a0979c3" containerName="nova-api-api" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.710270 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c79fffda-97d4-4772-bea3-dd5bf8721315" containerName="nova-manage" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.710401 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="90a24987-2482-402a-bd74-c4edd9e8c7a5" containerName="dnsmasq-dns" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.711655 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.718098 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.719277 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.719449 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.719553 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.827263 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.827365 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plzzk\" (UniqueName: \"kubernetes.io/projected/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-kube-api-access-plzzk\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.827427 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-config-data\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.827461 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.827544 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-logs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.827593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.928939 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-config-data\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.929250 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.929415 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-logs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.929510 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.929636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.929769 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plzzk\" (UniqueName: \"kubernetes.io/projected/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-kube-api-access-plzzk\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.930797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-logs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.933467 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-config-data\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.933558 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.933699 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.934650 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-public-tls-certs\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.949233 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plzzk\" (UniqueName: \"kubernetes.io/projected/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-kube-api-access-plzzk\") pod \"nova-api-0\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " pod="openstack/nova-api-0" Nov 
24 19:38:48 crc kubenswrapper[4812]: I1124 19:38:48.976797 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="404974fb-32c8-45a5-a403-da498a0979c3" path="/var/lib/kubelet/pods/404974fb-32c8-45a5-a403-da498a0979c3/volumes" Nov 24 19:38:49 crc kubenswrapper[4812]: I1124 19:38:49.044687 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:38:49 crc kubenswrapper[4812]: I1124 19:38:49.522860 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:38:49 crc kubenswrapper[4812]: W1124 19:38:49.532759 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3d373f4_f1f8_4c34_9c3e_047d0f67d6d9.slice/crio-f4db2447b431d51c943a31dcf8f8575862b333ed58807da40fec9643fb3c419e WatchSource:0}: Error finding container f4db2447b431d51c943a31dcf8f8575862b333ed58807da40fec9643fb3c419e: Status 404 returned error can't find the container with id f4db2447b431d51c943a31dcf8f8575862b333ed58807da40fec9643fb3c419e Nov 24 19:38:49 crc kubenswrapper[4812]: I1124 19:38:49.667294 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9","Type":"ContainerStarted","Data":"f4db2447b431d51c943a31dcf8f8575862b333ed58807da40fec9643fb3c419e"} Nov 24 19:38:50 crc kubenswrapper[4812]: I1124 19:38:50.684705 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9","Type":"ContainerStarted","Data":"936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6"} Nov 24 19:38:50 crc kubenswrapper[4812]: I1124 19:38:50.685164 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9","Type":"ContainerStarted","Data":"bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5"} Nov 24 19:38:50 crc kubenswrapper[4812]: I1124 19:38:50.730175 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.730142491 podStartE2EDuration="2.730142491s" podCreationTimestamp="2025-11-24 19:38:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:50.711637664 +0000 UTC m=+1324.500590105" watchObservedRunningTime="2025-11-24 19:38:50.730142491 +0000 UTC m=+1324.519094912" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.038760 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:59406->10.217.0.194:8775: read: connection reset by peer" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.038781 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.194:8775/\": read tcp 10.217.0.2:59410->10.217.0.194:8775: read: connection reset by peer" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.515113 4812 util.go:48] "No ready sandbox for pod can be found. 
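The "Observed pod startup duration" entry above reports podStartSLOduration=2.730142491s for nova-api-0; that figure is just observedRunningTime minus podCreationTimestamp, and the "0001-01-01 00:00:00 +0000 UTC" pull timestamps are Go's zero time.Time, i.e. no image pull was needed. A minimal sketch (not kubelet code) reproducing the arithmetic from the two logged timestamps:

```go
// Sketch only: recomputes the podStartSLOduration from the "Observed pod
// startup duration" entry above. The layout string matches the timestamp
// format in the log; both values are copied verbatim from it.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-11-24 19:38:48 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-24 19:38:50.730142491 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 2.730142491s, the logged podStartSLOduration
}
```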
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.603189 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs2rn\" (UniqueName: \"kubernetes.io/projected/9f9b1104-0b19-40dd-b447-364a7b4dc79a-kube-api-access-rs2rn\") pod \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.603279 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-combined-ca-bundle\") pod \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.603311 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-config-data\") pod \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.603369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f9b1104-0b19-40dd-b447-364a7b4dc79a-logs\") pod \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.603431 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-nova-metadata-tls-certs\") pod \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\" (UID: \"9f9b1104-0b19-40dd-b447-364a7b4dc79a\") " Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.604288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9b1104-0b19-40dd-b447-364a7b4dc79a-logs" (OuterVolumeSpecName: "logs") pod "9f9b1104-0b19-40dd-b447-364a7b4dc79a" (UID: "9f9b1104-0b19-40dd-b447-364a7b4dc79a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.609300 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9b1104-0b19-40dd-b447-364a7b4dc79a-kube-api-access-rs2rn" (OuterVolumeSpecName: "kube-api-access-rs2rn") pod "9f9b1104-0b19-40dd-b447-364a7b4dc79a" (UID: "9f9b1104-0b19-40dd-b447-364a7b4dc79a"). InnerVolumeSpecName "kube-api-access-rs2rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.640574 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f9b1104-0b19-40dd-b447-364a7b4dc79a" (UID: "9f9b1104-0b19-40dd-b447-364a7b4dc79a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.660549 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-config-data" (OuterVolumeSpecName: "config-data") pod "9f9b1104-0b19-40dd-b447-364a7b4dc79a" (UID: "9f9b1104-0b19-40dd-b447-364a7b4dc79a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.697212 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9f9b1104-0b19-40dd-b447-364a7b4dc79a" (UID: "9f9b1104-0b19-40dd-b447-364a7b4dc79a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.705695 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.705723 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs2rn\" (UniqueName: \"kubernetes.io/projected/9f9b1104-0b19-40dd-b447-364a7b4dc79a-kube-api-access-rs2rn\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.705732 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.705745 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9b1104-0b19-40dd-b447-364a7b4dc79a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.705754 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f9b1104-0b19-40dd-b447-364a7b4dc79a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.706847 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerID="841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806" exitCode=0 Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.706932 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.707205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f9b1104-0b19-40dd-b447-364a7b4dc79a","Type":"ContainerDied","Data":"841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806"} Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.707250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9f9b1104-0b19-40dd-b447-364a7b4dc79a","Type":"ContainerDied","Data":"28816508457fa5185267caf7d7776b4d6f84013206f209eee00a7d5985eac364"} Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.707271 4812 scope.go:117] "RemoveContainer" containerID="841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.746544 4812 scope.go:117] "RemoveContainer" containerID="ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.756164 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.774479 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.780482 4812 scope.go:117] "RemoveContainer" containerID="841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806" Nov 24 19:38:51 crc kubenswrapper[4812]: E1124 19:38:51.780838 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806\": container with ID starting with 841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806 not found: ID does not exist" containerID="841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.780865 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806"} err="failed to get container status \"841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806\": rpc error: code = NotFound desc = could not find container \"841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806\": container with ID starting with 841ce7af9f103f7288872040170406fd4b3092ec6cbda39b97ccb9b3c9554806 not found: ID does not exist" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.780887 4812 scope.go:117] "RemoveContainer" containerID="ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b" Nov 24 19:38:51 crc kubenswrapper[4812]: E1124 19:38:51.781150 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b\": container with ID starting with ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b not found: ID does not exist" containerID="ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.781171 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b"} err="failed to get container status \"ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b\": rpc error: code = 
NotFound desc = could not find container \"ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b\": container with ID starting with ef427c471848f1577a77b7610d7ce7a5bd3ceeb5261cd719fe52fb979636e32b not found: ID does not exist" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.783554 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:51 crc kubenswrapper[4812]: E1124 19:38:51.783908 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-metadata" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.783921 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-metadata" Nov 24 19:38:51 crc kubenswrapper[4812]: E1124 19:38:51.783966 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-log" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.783972 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-log" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.784160 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-metadata" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.784172 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" containerName="nova-metadata-log" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.785673 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.788709 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.788804 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.794261 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.908928 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fph6p\" (UniqueName: \"kubernetes.io/projected/ce28537c-ff5d-4318-bc95-8a29da6aae53-kube-api-access-fph6p\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.909041 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.909070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce28537c-ff5d-4318-bc95-8a29da6aae53-logs\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.909091 4812 reconciler_common.go:245] 
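The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" NotFound entries above are benign: the containers were already gone when the kubelet re-queried the runtime, and an already-absent container counts as removed. A hedged sketch of that idempotent-removal pattern (the runtime callback is a stand-in, not the real CRI client):

```go
// Sketch of the idempotent-delete pattern visible in the entries above:
// a removal that treats "NotFound" from the runtime as success, since the
// container is already gone. The remove func is hypothetical, not CRI.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound: ID does not exist")

func removeContainer(id string, remove func(string) error) error {
	if err := remove(id); err != nil && !errors.Is(err, errNotFound) {
		return err // a real failure worth surfacing
	}
	return nil // removed, or already absent: both count as done
}

func main() {
	alreadyGone := func(string) error { return errNotFound }
	fmt.Println(removeContainer("841ce7af", alreadyGone)) // <nil>
}
```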
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-config-data\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:51 crc kubenswrapper[4812]: I1124 19:38:51.909181 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.011121 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.011167 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce28537c-ff5d-4318-bc95-8a29da6aae53-logs\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.011189 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-config-data\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.011260 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.011296 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fph6p\" (UniqueName: \"kubernetes.io/projected/ce28537c-ff5d-4318-bc95-8a29da6aae53-kube-api-access-fph6p\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.012074 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce28537c-ff5d-4318-bc95-8a29da6aae53-logs\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.016041 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.016815 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 
24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.019436 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-config-data\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.029786 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fph6p\" (UniqueName: \"kubernetes.io/projected/ce28537c-ff5d-4318-bc95-8a29da6aae53-kube-api-access-fph6p\") pod \"nova-metadata-0\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.106684 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.248324 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.321268 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbfqt\" (UniqueName: \"kubernetes.io/projected/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-kube-api-access-nbfqt\") pod \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.321668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-combined-ca-bundle\") pod \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.321699 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-config-data\") pod \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\" (UID: \"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c\") " Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.329510 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-kube-api-access-nbfqt" (OuterVolumeSpecName: "kube-api-access-nbfqt") pod "a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" (UID: "a6e7e914-8687-44d0-ae3b-d1cbdf4e861c"). InnerVolumeSpecName "kube-api-access-nbfqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.349026 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-config-data" (OuterVolumeSpecName: "config-data") pod "a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" (UID: "a6e7e914-8687-44d0-ae3b-d1cbdf4e861c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.356380 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" (UID: "a6e7e914-8687-44d0-ae3b-d1cbdf4e861c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.425922 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbfqt\" (UniqueName: \"kubernetes.io/projected/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-kube-api-access-nbfqt\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.425956 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.425968 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.621323 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.719059 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce28537c-ff5d-4318-bc95-8a29da6aae53","Type":"ContainerStarted","Data":"4f856e21ab2f67e4837e0420d5d6704ccea9a30bfeea083fd0b4293a9359d1c0"} Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.720119 4812 generic.go:334] "Generic (PLEG): container finished" podID="a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" containerID="70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" exitCode=0 Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.720145 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c","Type":"ContainerDied","Data":"70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3"} Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.720162 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a6e7e914-8687-44d0-ae3b-d1cbdf4e861c","Type":"ContainerDied","Data":"06ae2986347285f357e0ecc6c04334edb5c452e2fc838a0fce4cc3b654076287"} Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.720181 4812 scope.go:117] "RemoveContainer" containerID="70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.720188 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.746484 4812 scope.go:117] "RemoveContainer" containerID="70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" Nov 24 19:38:52 crc kubenswrapper[4812]: E1124 19:38:52.749683 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3\": container with ID starting with 70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3 not found: ID does not exist" containerID="70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.749930 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3"} err="failed to get container status \"70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3\": rpc error: code = NotFound desc = could not find container \"70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3\": container with ID starting with 70b1b974b18a427b38a6ea845666552daee3070e59154bce1d46d193527228d3 not found: ID does not exist" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.768505 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.778812 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.788839 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:52 crc kubenswrapper[4812]: E1124 19:38:52.789567 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" containerName="nova-scheduler-scheduler" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.789600 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" containerName="nova-scheduler-scheduler" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.789984 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" containerName="nova-scheduler-scheduler" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.791190 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.799714 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.816880 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.841868 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-config-data\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.841932 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlr2w\" (UniqueName: \"kubernetes.io/projected/36aff13b-5f97-40c0-8c88-64c93ce91bcb-kube-api-access-wlr2w\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.842041 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.944412 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.944507 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-config-data\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.944565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlr2w\" (UniqueName: \"kubernetes.io/projected/36aff13b-5f97-40c0-8c88-64c93ce91bcb-kube-api-access-wlr2w\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.948970 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.948980 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-config-data\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.961996 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlr2w\" (UniqueName: 
\"kubernetes.io/projected/36aff13b-5f97-40c0-8c88-64c93ce91bcb-kube-api-access-wlr2w\") pod \"nova-scheduler-0\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " pod="openstack/nova-scheduler-0" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.983469 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9b1104-0b19-40dd-b447-364a7b4dc79a" path="/var/lib/kubelet/pods/9f9b1104-0b19-40dd-b447-364a7b4dc79a/volumes" Nov 24 19:38:52 crc kubenswrapper[4812]: I1124 19:38:52.984314 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6e7e914-8687-44d0-ae3b-d1cbdf4e861c" path="/var/lib/kubelet/pods/a6e7e914-8687-44d0-ae3b-d1cbdf4e861c/volumes" Nov 24 19:38:53 crc kubenswrapper[4812]: I1124 19:38:53.157296 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:38:53 crc kubenswrapper[4812]: I1124 19:38:53.651696 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:38:53 crc kubenswrapper[4812]: I1124 19:38:53.733123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce28537c-ff5d-4318-bc95-8a29da6aae53","Type":"ContainerStarted","Data":"83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07"} Nov 24 19:38:53 crc kubenswrapper[4812]: I1124 19:38:53.733442 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce28537c-ff5d-4318-bc95-8a29da6aae53","Type":"ContainerStarted","Data":"3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a"} Nov 24 19:38:53 crc kubenswrapper[4812]: I1124 19:38:53.739286 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36aff13b-5f97-40c0-8c88-64c93ce91bcb","Type":"ContainerStarted","Data":"b50e4e72e94a42f7e72562434ad82fb05fd349b3c5f911909f455c087fa9cf28"} Nov 24 19:38:53 crc kubenswrapper[4812]: I1124 19:38:53.763224 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.763204794 podStartE2EDuration="2.763204794s" podCreationTimestamp="2025-11-24 19:38:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:53.758897144 +0000 UTC m=+1327.547849565" watchObservedRunningTime="2025-11-24 19:38:53.763204794 +0000 UTC m=+1327.552157165" Nov 24 19:38:54 crc kubenswrapper[4812]: I1124 19:38:54.757299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36aff13b-5f97-40c0-8c88-64c93ce91bcb","Type":"ContainerStarted","Data":"43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447"} Nov 24 19:38:54 crc kubenswrapper[4812]: I1124 19:38:54.794419 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.7943901220000003 podStartE2EDuration="2.794390122s" podCreationTimestamp="2025-11-24 19:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:38:54.778604621 +0000 UTC m=+1328.567557022" watchObservedRunningTime="2025-11-24 19:38:54.794390122 +0000 UTC m=+1328.583342523" Nov 24 19:38:57 crc kubenswrapper[4812]: I1124 19:38:57.107109 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 19:38:57 crc 
Nov 24 19:38:58 crc kubenswrapper[4812]: I1124 19:38:58.158202 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Nov 24 19:38:59 crc kubenswrapper[4812]: I1124 19:38:59.045238 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 19:38:59 crc kubenswrapper[4812]: I1124 19:38:59.045714 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 19:39:00 crc kubenswrapper[4812]: I1124 19:39:00.065506 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 19:39:00 crc kubenswrapper[4812]: I1124 19:39:00.065601 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 19:39:02 crc kubenswrapper[4812]: I1124 19:39:02.107414 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 19:39:02 crc kubenswrapper[4812]: I1124 19:39:02.107807 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 19:39:03 crc kubenswrapper[4812]: I1124 19:39:03.125549 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 19:39:03 crc kubenswrapper[4812]: I1124 19:39:03.125565 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 19:39:03 crc kubenswrapper[4812]: I1124 19:39:03.158009 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 24 19:39:03 crc kubenswrapper[4812]: I1124 19:39:03.186715 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 24 19:39:03 crc kubenswrapper[4812]: I1124 19:39:03.920272 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 24 19:39:06 crc kubenswrapper[4812]: I1124 19:39:06.235081 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Nov 24 19:39:09 crc kubenswrapper[4812]: I1124 19:39:09.053684 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 19:39:09 crc kubenswrapper[4812]: I1124 19:39:09.054827 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 19:39:09 crc kubenswrapper[4812]: I1124 19:39:09.055265 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 19:39:09 crc kubenswrapper[4812]: I1124 19:39:09.061707 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 19:39:09 crc kubenswrapper[4812]: I1124 19:39:09.939209 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 19:39:09 crc kubenswrapper[4812]: I1124 19:39:09.947205 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 19:39:12 crc kubenswrapper[4812]: I1124 19:39:12.117018 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 19:39:12 crc kubenswrapper[4812]: I1124 19:39:12.117898 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 19:39:12 crc kubenswrapper[4812]: I1124 19:39:12.125649 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 19:39:12 crc kubenswrapper[4812]: I1124 19:39:12.130447 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.472646 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.473231 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="8f7453e8-f9c5-4588-80b8-82bba37e1514" containerName="openstackclient" containerID="cri-o://efc44587faa3ea03e56f32c183bd4613a63898038cbd2f133bb473692484cc9e" gracePeriod=2
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.488288 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.613363 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.613949 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="openstack-network-exporter" containerID="cri-o://d59c90fc27bffc2d0a2421c8e4a2fca424269770fabbb3a8764ba143d1d50c6e" gracePeriod=300
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.674013 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.749538 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="ovsdbserver-nb" containerID="cri-o://c1e926415066f8182e74a12929d98c3843c7ccb4cba6f365608af1c2c5278dee" gracePeriod=300
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.752378 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.752561 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="ovn-northd" containerID="cri-o://1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" gracePeriod=30
Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.752913 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="openstack-network-exporter" containerID="cri-o://14a1254945b1fd6a95f6b90d7cf54f5c07482fc6aea59fae62c4d9b5369203bf" gracePeriod=30
pod="openstack/ovn-northd-0" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="openstack-network-exporter" containerID="cri-o://14a1254945b1fd6a95f6b90d7cf54f5c07482fc6aea59fae62c4d9b5369203bf" gracePeriod=30 Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.799483 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement0bb7-account-delete-m56jl"] Nov 24 19:39:30 crc kubenswrapper[4812]: E1124 19:39:30.799867 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f7453e8-f9c5-4588-80b8-82bba37e1514" containerName="openstackclient" Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.799886 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f7453e8-f9c5-4588-80b8-82bba37e1514" containerName="openstackclient" Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.800099 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f7453e8-f9c5-4588-80b8-82bba37e1514" containerName="openstackclient" Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.800678 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:30 crc kubenswrapper[4812]: E1124 19:39:30.800930 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:30 crc kubenswrapper[4812]: E1124 19:39:30.800982 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data podName:db2cd03e-b999-48ea-b540-7fd35356ba8b nodeName:}" failed. No retries permitted until 2025-11-24 19:39:31.300967052 +0000 UTC m=+1365.089919423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data") pod "rabbitmq-cell1-server-0" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b") : configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.807599 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement0bb7-account-delete-m56jl"] Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.901881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdsg\" (UniqueName: \"kubernetes.io/projected/b12268ff-1299-46c4-8937-8c4dc00f7dc5-kube-api-access-jsdsg\") pod \"placement0bb7-account-delete-m56jl\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.901933 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12268ff-1299-46c4-8937-8c4dc00f7dc5-operator-scripts\") pod \"placement0bb7-account-delete-m56jl\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.958091 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance7fd2-account-delete-vl62k"] Nov 24 19:39:30 crc kubenswrapper[4812]: I1124 19:39:30.959271 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.007032 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12268ff-1299-46c4-8937-8c4dc00f7dc5-operator-scripts\") pod \"placement0bb7-account-delete-m56jl\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.007255 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdsg\" (UniqueName: \"kubernetes.io/projected/b12268ff-1299-46c4-8937-8c4dc00f7dc5-kube-api-access-jsdsg\") pod \"placement0bb7-account-delete-m56jl\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.008228 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12268ff-1299-46c4-8937-8c4dc00f7dc5-operator-scripts\") pod \"placement0bb7-account-delete-m56jl\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.019759 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbicanfac2-account-delete-v4hfz"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.024406 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.039715 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance7fd2-account-delete-vl62k"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.101432 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdsg\" (UniqueName: \"kubernetes.io/projected/b12268ff-1299-46c4-8937-8c4dc00f7dc5-kube-api-access-jsdsg\") pod \"placement0bb7-account-delete-m56jl\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.106901 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicanfac2-account-delete-v4hfz"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.111719 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts\") pod \"glance7fd2-account-delete-vl62k\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.114443 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rb4x\" (UniqueName: \"kubernetes.io/projected/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-kube-api-access-8rb4x\") pod \"barbicanfac2-account-delete-v4hfz\" (UID: \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.114768 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp9sz\" (UniqueName: \"kubernetes.io/projected/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-kube-api-access-fp9sz\") pod 
\"glance7fd2-account-delete-vl62k\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.114813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-operator-scripts\") pod \"barbicanfac2-account-delete-v4hfz\" (UID: \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.135811 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j8glp"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.149964 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder5926-account-delete-lvwv5"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.175935 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.192427 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j8glp"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.221459 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rb4x\" (UniqueName: \"kubernetes.io/projected/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-kube-api-access-8rb4x\") pod \"barbicanfac2-account-delete-v4hfz\" (UID: \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.221756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp9sz\" (UniqueName: \"kubernetes.io/projected/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-kube-api-access-fp9sz\") pod \"glance7fd2-account-delete-vl62k\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.221777 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-operator-scripts\") pod \"barbicanfac2-account-delete-v4hfz\" (UID: \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.221880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts\") pod \"glance7fd2-account-delete-vl62k\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.222528 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts\") pod \"glance7fd2-account-delete-vl62k\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.223002 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-operator-scripts\") pod \"barbicanfac2-account-delete-v4hfz\" (UID: 
\"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.260701 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-fk6qb"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.279767 4812 generic.go:334] "Generic (PLEG): container finished" podID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerID="14a1254945b1fd6a95f6b90d7cf54f5c07482fc6aea59fae62c4d9b5369203bf" exitCode=2 Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.279846 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0648d4b9-0096-4092-b2b9-70e23f9c863c","Type":"ContainerDied","Data":"14a1254945b1fd6a95f6b90d7cf54f5c07482fc6aea59fae62c4d9b5369203bf"} Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.288128 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rb4x\" (UniqueName: \"kubernetes.io/projected/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-kube-api-access-8rb4x\") pod \"barbicanfac2-account-delete-v4hfz\" (UID: \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.297022 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp9sz\" (UniqueName: \"kubernetes.io/projected/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-kube-api-access-fp9sz\") pod \"glance7fd2-account-delete-vl62k\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.306895 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.322855 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znmhl\" (UniqueName: \"kubernetes.io/projected/d1b0b239-3d7b-40c3-a599-f3f74d452813-kube-api-access-znmhl\") pod \"cinder5926-account-delete-lvwv5\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.323161 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b0b239-3d7b-40c3-a599-f3f74d452813-operator-scripts\") pod \"cinder5926-account-delete-lvwv5\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: E1124 19:39:31.323475 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:31 crc kubenswrapper[4812]: E1124 19:39:31.323589 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data podName:db2cd03e-b999-48ea-b540-7fd35356ba8b nodeName:}" failed. No retries permitted until 2025-11-24 19:39:32.323571142 +0000 UTC m=+1366.112523513 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data") pod "rabbitmq-cell1-server-0" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b") : configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.346738 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.360698 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-fk6qb"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.376311 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_29c07c15-c183-4546-8c7b-574776382e9b/ovsdbserver-nb/0.log" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.376379 4812 generic.go:334] "Generic (PLEG): container finished" podID="29c07c15-c183-4546-8c7b-574776382e9b" containerID="d59c90fc27bffc2d0a2421c8e4a2fca424269770fabbb3a8764ba143d1d50c6e" exitCode=2 Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.376397 4812 generic.go:334] "Generic (PLEG): container finished" podID="29c07c15-c183-4546-8c7b-574776382e9b" containerID="c1e926415066f8182e74a12929d98c3843c7ccb4cba6f365608af1c2c5278dee" exitCode=143 Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.376417 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29c07c15-c183-4546-8c7b-574776382e9b","Type":"ContainerDied","Data":"d59c90fc27bffc2d0a2421c8e4a2fca424269770fabbb3a8764ba143d1d50c6e"} Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.376440 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29c07c15-c183-4546-8c7b-574776382e9b","Type":"ContainerDied","Data":"c1e926415066f8182e74a12929d98c3843c7ccb4cba6f365608af1c2c5278dee"} Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.404179 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder5926-account-delete-lvwv5"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.427372 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znmhl\" (UniqueName: \"kubernetes.io/projected/d1b0b239-3d7b-40c3-a599-f3f74d452813-kube-api-access-znmhl\") pod \"cinder5926-account-delete-lvwv5\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.427488 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b0b239-3d7b-40c3-a599-f3f74d452813-operator-scripts\") pod \"cinder5926-account-delete-lvwv5\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.428220 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b0b239-3d7b-40c3-a599-f3f74d452813-operator-scripts\") pod \"cinder5926-account-delete-lvwv5\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.443150 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.443593 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-n2bml"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.443757 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-n2bml" podUID="8cfe0ed1-d668-482c-8303-2ca93bdfc057" containerName="openstack-network-exporter" containerID="cri-o://62d03c4a9e5685c9c943a0d1a4db0073f9bc0452d073dbf66734cda8bf0c38b1" gracePeriod=30 Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.453057 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-d48t8"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.469394 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-cgt9p"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.504622 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron448d-account-delete-qskvr"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.505821 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.506767 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.539932 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znmhl\" (UniqueName: \"kubernetes.io/projected/d1b0b239-3d7b-40c3-a599-f3f74d452813-kube-api-access-znmhl\") pod \"cinder5926-account-delete-lvwv5\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.552356 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.564228 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron448d-account-delete-qskvr"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.632569 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgxq6\" (UniqueName: \"kubernetes.io/projected/b72df6d7-27f7-49e5-93d9-4069db72b602-kube-api-access-dgxq6\") pod \"neutron448d-account-delete-qskvr\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.632613 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72df6d7-27f7-49e5-93d9-4069db72b602-operator-scripts\") pod \"neutron448d-account-delete-qskvr\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: E1124 19:39:31.634369 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 24 19:39:31 crc kubenswrapper[4812]: E1124 19:39:31.634412 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data podName:7cb8ede6-6163-4906-89f7-7fe6458edc36 nodeName:}" failed. 
No retries permitted until 2025-11-24 19:39:32.134401225 +0000 UTC m=+1365.923353596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data") pod "rabbitmq-server-0" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36") : configmap "rabbitmq-config-data" not found Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.642623 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-xnfjt"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.642875 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" podUID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" containerName="dnsmasq-dns" containerID="cri-o://1f40489248bad75264493cd7ecc63fec04273b7705afa1d923049f768b10f769" gracePeriod=10 Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.673165 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-pq5zx"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.731625 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-pq5zx"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.733502 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgxq6\" (UniqueName: \"kubernetes.io/projected/b72df6d7-27f7-49e5-93d9-4069db72b602-kube-api-access-dgxq6\") pod \"neutron448d-account-delete-qskvr\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.733546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72df6d7-27f7-49e5-93d9-4069db72b602-operator-scripts\") pod \"neutron448d-account-delete-qskvr\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.748003 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72df6d7-27f7-49e5-93d9-4069db72b602-operator-scripts\") pod \"neutron448d-account-delete-qskvr\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.775926 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgxq6\" (UniqueName: \"kubernetes.io/projected/b72df6d7-27f7-49e5-93d9-4069db72b602-kube-api-access-dgxq6\") pod \"neutron448d-account-delete-qskvr\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.801414 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi5538-account-delete-t9sww"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.802693 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.810385 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nl6jz"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.849528 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nl6jz"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.926481 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi5538-account-delete-t9sww"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.931804 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell063c1-account-delete-9lhdk"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.933131 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.936458 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.946458 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbjsg\" (UniqueName: \"kubernetes.io/projected/7c8539e2-3b3b-488b-9111-d59aa7317490-kube-api-access-pbjsg\") pod \"novacell063c1-account-delete-9lhdk\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.946519 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts\") pod \"novaapi5538-account-delete-t9sww\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.946608 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4lg\" (UniqueName: \"kubernetes.io/projected/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-kube-api-access-lw4lg\") pod \"novaapi5538-account-delete-t9sww\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.946723 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts\") pod \"novacell063c1-account-delete-9lhdk\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.962406 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-wv8xf"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.979621 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-wv8xf"] Nov 24 19:39:31 crc kubenswrapper[4812]: I1124 19:39:31.989326 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell063c1-account-delete-9lhdk"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.011487 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-574f454648-szvd4"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.012615 4812 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/placement-574f454648-szvd4" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerName="placement-log" containerID="cri-o://692e428119dfde862fe0aeb20102df094a9cb17fc1bcce8548dce4d9b4546e92" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.013284 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-574f454648-szvd4" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerName="placement-api" containerID="cri-o://d07bf2cbab76f1bef1a940806ac6f6323ae423f3a69064aeb96db6f329d5665f" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.044089 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-99q2n"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.048583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts\") pod \"novacell063c1-account-delete-9lhdk\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.048658 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbjsg\" (UniqueName: \"kubernetes.io/projected/7c8539e2-3b3b-488b-9111-d59aa7317490-kube-api-access-pbjsg\") pod \"novacell063c1-account-delete-9lhdk\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.048695 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts\") pod \"novaapi5538-account-delete-t9sww\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.048790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4lg\" (UniqueName: \"kubernetes.io/projected/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-kube-api-access-lw4lg\") pod \"novaapi5538-account-delete-t9sww\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.050791 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts\") pod \"novacell063c1-account-delete-9lhdk\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.051671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts\") pod \"novaapi5538-account-delete-t9sww\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.065023 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-99q2n"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.082486 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbjsg\" (UniqueName: 
\"kubernetes.io/projected/7c8539e2-3b3b-488b-9111-d59aa7317490-kube-api-access-pbjsg\") pod \"novacell063c1-account-delete-9lhdk\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:32 crc kubenswrapper[4812]: E1124 19:39:32.088364 4812 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-d48t8" message=< Nov 24 19:39:32 crc kubenswrapper[4812]: Exiting ovn-controller (1) [ OK ] Nov 24 19:39:32 crc kubenswrapper[4812]: > Nov 24 19:39:32 crc kubenswrapper[4812]: E1124 19:39:32.088399 4812 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-d48t8" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerName="ovn-controller" containerID="cri-o://d5ff05e4d3f2fbc8607a3a9a2e2d57c02a86f1e0f1e41e800c7ae06df2e9f0dc" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.088435 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-d48t8" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerName="ovn-controller" containerID="cri-o://d5ff05e4d3f2fbc8607a3a9a2e2d57c02a86f1e0f1e41e800c7ae06df2e9f0dc" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.088720 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hrl87"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.120098 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4lg\" (UniqueName: \"kubernetes.io/projected/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-kube-api-access-lw4lg\") pod \"novaapi5538-account-delete-t9sww\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.142499 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hrl87"] Nov 24 19:39:32 crc kubenswrapper[4812]: E1124 19:39:32.152863 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 24 19:39:32 crc kubenswrapper[4812]: E1124 19:39:32.152906 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data podName:7cb8ede6-6163-4906-89f7-7fe6458edc36 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:33.15289249 +0000 UTC m=+1366.941844861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data") pod "rabbitmq-server-0" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36") : configmap "rabbitmq-config-data" not found Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.230143 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zpr9"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.249636 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-2zpr9"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.255991 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.277949 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.278170 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-log" containerID="cri-o://421d219729fffc3e229f6f163aa63a1f6a6b2a4dd6565bac6b4f81cad77dfa57" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.278575 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-httpd" containerID="cri-o://a37153fef35e2218bd72e8d349e3157d2b4261a45df7d03e4deb1d5ee538a030" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.290456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.292536 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.293079 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="openstack-network-exporter" containerID="cri-o://b80b07ff93663ea7089effb52d3abdb13f3c43cfcfd43f87f18a84303d9f1a1c" gracePeriod=300 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.312029 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_29c07c15-c183-4546-8c7b-574776382e9b/ovsdbserver-nb/0.log" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.312109 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.319686 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.320009 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-log" containerID="cri-o://289e772fea34cd9b5573b8563f47c8b619a399e0ab44c84b1b81de861433540a" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.320123 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-httpd" containerID="cri-o://0246dd3196f30fe8b017001268ba69633ac8f5832eaad85f9a757939e217b9fb" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.338121 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.340623 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-server" containerID="cri-o://ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.340660 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-server" containerID="cri-o://369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.340801 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="swift-recon-cron" containerID="cri-o://5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.340858 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="rsync" containerID="cri-o://52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.340901 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-expirer" containerID="cri-o://7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.340946 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-updater" containerID="cri-o://583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.340984 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-auditor" containerID="cri-o://e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c" gracePeriod=30 Nov 24 19:39:32 crc 
kubenswrapper[4812]: I1124 19:39:32.341021 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-replicator" containerID="cri-o://6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.341080 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-server" containerID="cri-o://3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.341119 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-updater" containerID="cri-o://5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.341152 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-auditor" containerID="cri-o://0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.341196 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-replicator" containerID="cri-o://f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.341235 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-auditor" containerID="cri-o://85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.341263 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-reaper" containerID="cri-o://95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.341294 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-replicator" containerID="cri-o://d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.372510 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.372876 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="cinder-scheduler" containerID="cri-o://de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.377581 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="probe" 
containerID="cri-o://03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.447806 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.448535 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api-log" containerID="cri-o://5afde10d2f90a0c5d1bc9ea9375923c024c51c32da257a3842c5489dcc1fd794" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.449183 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api" containerID="cri-o://4984c0a89981ef7e9ccb2912cd1e3db25ba3f333055dcbc10b30cb69f8c45835" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: E1124 19:39:32.466464 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:32 crc kubenswrapper[4812]: E1124 19:39:32.466558 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data podName:db2cd03e-b999-48ea-b540-7fd35356ba8b nodeName:}" failed. No retries permitted until 2025-11-24 19:39:34.466529032 +0000 UTC m=+1368.255481403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data") pod "rabbitmq-cell1-server-0" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b") : configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.481172 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-n2bml_8cfe0ed1-d668-482c-8303-2ca93bdfc057/openstack-network-exporter/0.log" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.481227 4812 generic.go:334] "Generic (PLEG): container finished" podID="8cfe0ed1-d668-482c-8303-2ca93bdfc057" containerID="62d03c4a9e5685c9c943a0d1a4db0073f9bc0452d073dbf66734cda8bf0c38b1" exitCode=2 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.481308 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n2bml" event={"ID":"8cfe0ed1-d668-482c-8303-2ca93bdfc057","Type":"ContainerDied","Data":"62d03c4a9e5685c9c943a0d1a4db0073f9bc0452d073dbf66734cda8bf0c38b1"} Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.512403 4812 generic.go:334] "Generic (PLEG): container finished" podID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerID="d5ff05e4d3f2fbc8607a3a9a2e2d57c02a86f1e0f1e41e800c7ae06df2e9f0dc" exitCode=0 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.512520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-d48t8" event={"ID":"4f3dca2d-7c6c-428d-9789-1463444fe46f","Type":"ContainerDied","Data":"d5ff05e4d3f2fbc8607a3a9a2e2d57c02a86f1e0f1e41e800c7ae06df2e9f0dc"} Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.516812 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerID="692e428119dfde862fe0aeb20102df094a9cb17fc1bcce8548dce4d9b4546e92" exitCode=143 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.516930 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-574f454648-szvd4" event={"ID":"1c8509a7-6885-4238-9da3-b214b5f8868e","Type":"ContainerDied","Data":"692e428119dfde862fe0aeb20102df094a9cb17fc1bcce8548dce4d9b4546e92"} Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.541599 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="ovsdbserver-sb" containerID="cri-o://542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061" gracePeriod=300 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.555175 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_29c07c15-c183-4546-8c7b-574776382e9b/ovsdbserver-nb/0.log" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.555250 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"29c07c15-c183-4546-8c7b-574776382e9b","Type":"ContainerDied","Data":"eae3ae89009ffccbe83d7a6ba4cfe9b9ba857c8cf8f22d177ca4038a9cfd5056"} Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.555284 4812 scope.go:117] "RemoveContainer" containerID="d59c90fc27bffc2d0a2421c8e4a2fca424269770fabbb3a8764ba143d1d50c6e" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.561006 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.567828 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f58df686c-jn8qq"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.568080 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f58df686c-jn8qq" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-api" containerID="cri-o://1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.568227 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f58df686c-jn8qq" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-httpd" containerID="cri-o://9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571019 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-scripts\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571055 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-combined-ca-bundle\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571117 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-ovsdbserver-nb-tls-certs\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571140 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-config\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571176 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvskw\" (UniqueName: \"kubernetes.io/projected/29c07c15-c183-4546-8c7b-574776382e9b-kube-api-access-vvskw\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571196 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571228 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29c07c15-c183-4546-8c7b-574776382e9b-ovsdb-rundir\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.571371 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-metrics-certs-tls-certs\") pod \"29c07c15-c183-4546-8c7b-574776382e9b\" (UID: \"29c07c15-c183-4546-8c7b-574776382e9b\") " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.573535 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-scripts" (OuterVolumeSpecName: "scripts") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.580726 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-config" (OuterVolumeSpecName: "config") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.589839 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29c07c15-c183-4546-8c7b-574776382e9b-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.599677 4812 generic.go:334] "Generic (PLEG): container finished" podID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" containerID="1f40489248bad75264493cd7ecc63fec04273b7705afa1d923049f768b10f769" exitCode=0 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.599728 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" event={"ID":"d7b9be63-7c78-4d97-87e7-8efd09c2669b","Type":"ContainerDied","Data":"1f40489248bad75264493cd7ecc63fec04273b7705afa1d923049f768b10f769"} Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.629473 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-86f4846457-t2fl4"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.636520 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-86f4846457-t2fl4" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-httpd" containerID="cri-o://c1a3bf87878ce67c52c317f9bd13b4bf250280872ea6f819e3464fba67ab5710" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.637044 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-86f4846457-t2fl4" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-server" containerID="cri-o://37219908d6616db696c761cac954619d4884750ea17c429640079a84f18bb5e8" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.643513 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.643730 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c07c15-c183-4546-8c7b-574776382e9b-kube-api-access-vvskw" (OuterVolumeSpecName: "kube-api-access-vvskw") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "kube-api-access-vvskw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.661795 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.687728 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.691586 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.691615 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29c07c15-c183-4546-8c7b-574776382e9b-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.691625 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvskw\" (UniqueName: \"kubernetes.io/projected/29c07c15-c183-4546-8c7b-574776382e9b-kube-api-access-vvskw\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.691645 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.691655 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/29c07c15-c183-4546-8c7b-574776382e9b-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.702108 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.704962 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-metadata" containerID="cri-o://83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.708861 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-94c5495b6-f8ptk"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.709064 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener-log" containerID="cri-o://c13e6786a49a879ebdafe7e2706bbeadf9a5620f8ec5d41a07a487a04ba9d615" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.709094 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener" containerID="cri-o://44d0bec5b4e03ee1c205aff04d3d557f0b6f05d3d6adee7a3fd558ceba3cbd6b" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.702566 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-log" containerID="cri-o://3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 
19:39:32.729397 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b4d7f8ffb-jqsz8"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.729617 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api-log" containerID="cri-o://4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.729960 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api" containerID="cri-o://0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.741811 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6474fd5f77-zwqph"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.742092 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6474fd5f77-zwqph" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker-log" containerID="cri-o://8394c58793ba4d118153a08a1542de897146c92d9f0bb3f90ac84fef380e3cf0" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.742240 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6474fd5f77-zwqph" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker" containerID="cri-o://a5b96a8d42c3405967068d87a03841996bd50f3213a845dde3d04bbf1ba0f3ff" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.752423 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.752668 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-log" containerID="cri-o://bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.753024 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-api" containerID="cri-o://936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.762023 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.781831 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-c5h9s"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.786000 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-c5h9s"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.789912 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.797779 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.802406 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2180-account-create-k94c2"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.816813 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2180-account-create-k94c2"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.822215 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.822503 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://632ed76758afffe7deabbb497be96385b1fa68fdc85d0a39f726b5ca3baf0880" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.824628 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerName="rabbitmq" containerID="cri-o://f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4" gracePeriod=604800 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.834214 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.834620 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" containerName="nova-scheduler-scheduler" containerID="cri-o://43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" gracePeriod=30 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.955622 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerName="rabbitmq" containerID="cri-o://608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc" gracePeriod=604800 Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.972702 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 24 19:39:32 crc kubenswrapper[4812]: I1124 19:39:32.987289 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f01750-5996-43aa-9799-1f08a3e68b53" path="/var/lib/kubelet/pods/29f01750-5996-43aa-9799-1f08a3e68b53/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.011838 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e22c06f-3e52-4943-9a00-b964d62d8cab" path="/var/lib/kubelet/pods/2e22c06f-3e52-4943-9a00-b964d62d8cab/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.012885 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5a2d87-4746-4254-9f63-10fe73a4001f" path="/var/lib/kubelet/pods/5a5a2d87-4746-4254-9f63-10fe73a4001f/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.013141 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.016624 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd280c7-5e1d-4106-af02-86eee8c72f62" path="/var/lib/kubelet/pods/7fd280c7-5e1d-4106-af02-86eee8c72f62/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.047313 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948e94ea-f146-46a9-81e7-1254f7e3661e" path="/var/lib/kubelet/pods/948e94ea-f146-46a9-81e7-1254f7e3661e/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.047951 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3986d6e-e683-4327-a5cd-db98e23ca287" path="/var/lib/kubelet/pods/b3986d6e-e683-4327-a5cd-db98e23ca287/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.049011 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c79fffda-97d4-4772-bea3-dd5bf8721315" path="/var/lib/kubelet/pods/c79fffda-97d4-4772-bea3-dd5bf8721315/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.049561 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9d1fd1-4439-473b-a619-dd107ae950ff" path="/var/lib/kubelet/pods/cd9d1fd1-4439-473b-a619-dd107ae950ff/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.050073 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da169797-327e-4338-961d-4965dcb70d05" path="/var/lib/kubelet/pods/da169797-327e-4338-961d-4965dcb70d05/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.088047 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f" path="/var/lib/kubelet/pods/e4ae996d-01c2-4ac3-acdb-c4ead2a2c33f/volumes" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.091841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.094436 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "29c07c15-c183-4546-8c7b-574776382e9b" (UID: "29c07c15-c183-4546-8c7b-574776382e9b"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.112511 4812 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 24 19:39:33 crc kubenswrapper[4812]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 24 19:39:33 crc kubenswrapper[4812]: + source /usr/local/bin/container-scripts/functions Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNBridge=br-int Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNRemote=tcp:localhost:6642 Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNEncapType=geneve Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNAvailabilityZones= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ EnableChassisAsGateway=true Nov 24 19:39:33 crc kubenswrapper[4812]: ++ PhysicalNetworks= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNHostName= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 24 19:39:33 crc kubenswrapper[4812]: ++ ovs_dir=/var/lib/openvswitch Nov 24 19:39:33 crc kubenswrapper[4812]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 24 19:39:33 crc kubenswrapper[4812]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 24 19:39:33 crc kubenswrapper[4812]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + sleep 0.5 Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + sleep 0.5 Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + cleanup_ovsdb_server_semaphore Nov 24 19:39:33 crc kubenswrapper[4812]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 24 19:39:33 crc kubenswrapper[4812]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 24 19:39:33 crc kubenswrapper[4812]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-cgt9p" message=< Nov 24 19:39:33 crc kubenswrapper[4812]: Exiting ovsdb-server (5) [ OK ] Nov 24 19:39:33 crc kubenswrapper[4812]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 24 19:39:33 crc kubenswrapper[4812]: + source /usr/local/bin/container-scripts/functions Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNBridge=br-int Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNRemote=tcp:localhost:6642 Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNEncapType=geneve Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNAvailabilityZones= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ EnableChassisAsGateway=true Nov 24 19:39:33 crc kubenswrapper[4812]: ++ PhysicalNetworks= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNHostName= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 24 19:39:33 crc kubenswrapper[4812]: ++ ovs_dir=/var/lib/openvswitch Nov 24 19:39:33 crc kubenswrapper[4812]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 24 19:39:33 crc kubenswrapper[4812]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 24 19:39:33 crc kubenswrapper[4812]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + sleep 0.5 Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + sleep 0.5 Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + cleanup_ovsdb_server_semaphore Nov 24 19:39:33 crc kubenswrapper[4812]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 24 19:39:33 crc kubenswrapper[4812]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 24 19:39:33 crc kubenswrapper[4812]: > Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.112549 4812 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 24 19:39:33 crc kubenswrapper[4812]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 24 19:39:33 crc kubenswrapper[4812]: + source /usr/local/bin/container-scripts/functions Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNBridge=br-int Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNRemote=tcp:localhost:6642 Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNEncapType=geneve Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNAvailabilityZones= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ EnableChassisAsGateway=true Nov 24 19:39:33 crc kubenswrapper[4812]: ++ PhysicalNetworks= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ OVNHostName= Nov 24 19:39:33 crc kubenswrapper[4812]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 24 19:39:33 crc kubenswrapper[4812]: ++ ovs_dir=/var/lib/openvswitch Nov 24 19:39:33 crc kubenswrapper[4812]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 24 19:39:33 crc kubenswrapper[4812]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 24 19:39:33 crc kubenswrapper[4812]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + sleep 0.5 Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + sleep 0.5 Nov 24 19:39:33 crc kubenswrapper[4812]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 24 19:39:33 crc kubenswrapper[4812]: + cleanup_ovsdb_server_semaphore Nov 24 19:39:33 crc kubenswrapper[4812]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 24 19:39:33 crc kubenswrapper[4812]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 24 19:39:33 crc kubenswrapper[4812]: > pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" containerID="cri-o://20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.112589 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" containerID="cri-o://20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" gracePeriod=29 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.114454 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.114468 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/29c07c15-c183-4546-8c7b-574776382e9b-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.148927 4812 scope.go:117] "RemoveContainer" containerID="c1e926415066f8182e74a12929d98c3843c7ccb4cba6f365608af1c2c5278dee" Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.183907 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.188266 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.199458 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.199515 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" containerName="nova-scheduler-scheduler" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.216226 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" 
containerID="cri-o://623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" gracePeriod=29 Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.217542 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.217589 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data podName:7cb8ede6-6163-4906-89f7-7fe6458edc36 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:35.217573894 +0000 UTC m=+1369.006526265 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data") pod "rabbitmq-server-0" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36") : configmap "rabbitmq-config-data" not found Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.348552 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerName="galera" containerID="cri-o://c006dae87ca76eaf2e493bd8a0fbad15d198c2c947c2d61201bc1d4f3e9c9d97" gracePeriod=30 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.508305 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.511828 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbicanfac2-account-delete-v4hfz"] Nov 24 19:39:33 crc kubenswrapper[4812]: W1124 19:39:33.536770 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d5ebd7_a16f_49bb_a80e_1138d5f197d3.slice/crio-2ac7a4c2fa6ad3cf878bfa556ab1e046777c3c27ed6701e1ced8ec7359e2802a WatchSource:0}: Error finding container 2ac7a4c2fa6ad3cf878bfa556ab1e046777c3c27ed6701e1ced8ec7359e2802a: Status 404 returned error can't find the container with id 2ac7a4c2fa6ad3cf878bfa556ab1e046777c3c27ed6701e1ced8ec7359e2802a Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.538450 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.561574 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-n2bml_8cfe0ed1-d668-482c-8303-2ca93bdfc057/openstack-network-exporter/0.log" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.561639 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.591665 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-d48t8" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.611851 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.624004 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanfac2-account-delete-v4hfz" event={"ID":"72d5ebd7-a16f-49bb-a80e-1138d5f197d3","Type":"ContainerStarted","Data":"2ac7a4c2fa6ad3cf878bfa556ab1e046777c3c27ed6701e1ced8ec7359e2802a"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.625179 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-sb\") pod \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.625243 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-nb\") pod \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.625269 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-svc\") pod \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.625440 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-swift-storage-0\") pod \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.625490 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg22s\" (UniqueName: \"kubernetes.io/projected/d7b9be63-7c78-4d97-87e7-8efd09c2669b-kube-api-access-lg22s\") pod \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.625564 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-config\") pod \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\" (UID: \"d7b9be63-7c78-4d97-87e7-8efd09c2669b\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.632237 4812 generic.go:334] "Generic (PLEG): container finished" podID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerID="8394c58793ba4d118153a08a1542de897146c92d9f0bb3f90ac84fef380e3cf0" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.632322 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6474fd5f77-zwqph" event={"ID":"5388a6e8-73d7-4097-9521-b47649b4c6c8","Type":"ContainerDied","Data":"8394c58793ba4d118153a08a1542de897146c92d9f0bb3f90ac84fef380e3cf0"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.637067 4812 generic.go:334] "Generic (PLEG): container finished" podID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerID="03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.637106 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7063bb41-d0d1-4605-b265-1fb3adce77b5","Type":"ContainerDied","Data":"03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.640094 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerID="289e772fea34cd9b5573b8563f47c8b619a399e0ab44c84b1b81de861433540a" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.640138 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf01449d-a52d-488f-bd93-b5f84b57fb13","Type":"ContainerDied","Data":"289e772fea34cd9b5573b8563f47c8b619a399e0ab44c84b1b81de861433540a"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.641793 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b9be63-7c78-4d97-87e7-8efd09c2669b-kube-api-access-lg22s" (OuterVolumeSpecName: "kube-api-access-lg22s") pod "d7b9be63-7c78-4d97-87e7-8efd09c2669b" (UID: "d7b9be63-7c78-4d97-87e7-8efd09c2669b"). InnerVolumeSpecName "kube-api-access-lg22s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.681645 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.199:6080/vnc_lite.html\": dial tcp 10.217.0.199:6080: connect: connection refused" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.684594 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrdjt"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.691571 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.691750 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="93240875-2cab-44e0-b475-20f46cb4850e" containerName="nova-cell0-conductor-conductor" containerID="cri-o://a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af" gracePeriod=30 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.699273 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zrdjt"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.718436 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jv5mj"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.723481 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724627 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724650 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724657 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" 
containerID="583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724650 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724758 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724848 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724860 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724664 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724879 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="87b80396-f87d-435e-8478-0ecb34bccd94" containerName="nova-cell1-conductor-conductor" containerID="cri-o://5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" gracePeriod=30 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724922 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.724887 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725132 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725143 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725151 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725157 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 
19:39:33.725172 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725179 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725186 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725200 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725207 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725254 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725278 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725288 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725298 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725306 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725322 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725346 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.725355 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.727746 4812 generic.go:334] "Generic (PLEG): container finished" podID="ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" containerID="632ed76758afffe7deabbb497be96385b1fa68fdc85d0a39f726b5ca3baf0880" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.727797 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26","Type":"ContainerDied","Data":"632ed76758afffe7deabbb497be96385b1fa68fdc85d0a39f726b5ca3baf0880"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.728996 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-metrics-certs-tls-certs\") pod \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729032 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovn-rundir\") pod \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729071 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run\") pod \"4f3dca2d-7c6c-428d-9789-1463444fe46f\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729088 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-combined-ca-bundle\") pod \"4f3dca2d-7c6c-428d-9789-1463444fe46f\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729135 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-combined-ca-bundle\") pod \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729159 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-ovn-controller-tls-certs\") pod \"4f3dca2d-7c6c-428d-9789-1463444fe46f\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729185 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-log-ovn\") pod \"4f3dca2d-7c6c-428d-9789-1463444fe46f\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 
19:39:33.729227 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovs-rundir\") pod \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729228 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "8cfe0ed1-d668-482c-8303-2ca93bdfc057" (UID: "8cfe0ed1-d668-482c-8303-2ca93bdfc057"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729244 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26sgz\" (UniqueName: \"kubernetes.io/projected/8cfe0ed1-d668-482c-8303-2ca93bdfc057-kube-api-access-26sgz\") pod \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729265 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfe0ed1-d668-482c-8303-2ca93bdfc057-config\") pod \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\" (UID: \"8cfe0ed1-d668-482c-8303-2ca93bdfc057\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729311 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f3dca2d-7c6c-428d-9789-1463444fe46f-scripts\") pod \"4f3dca2d-7c6c-428d-9789-1463444fe46f\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729358 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475jv\" (UniqueName: \"kubernetes.io/projected/4f3dca2d-7c6c-428d-9789-1463444fe46f-kube-api-access-475jv\") pod \"4f3dca2d-7c6c-428d-9789-1463444fe46f\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729391 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run-ovn\") pod \"4f3dca2d-7c6c-428d-9789-1463444fe46f\" (UID: \"4f3dca2d-7c6c-428d-9789-1463444fe46f\") " Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729760 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4f3dca2d-7c6c-428d-9789-1463444fe46f" (UID: "4f3dca2d-7c6c-428d-9789-1463444fe46f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729791 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run" (OuterVolumeSpecName: "var-run") pod "4f3dca2d-7c6c-428d-9789-1463444fe46f" (UID: "4f3dca2d-7c6c-428d-9789-1463444fe46f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.729921 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "8cfe0ed1-d668-482c-8303-2ca93bdfc057" (UID: "8cfe0ed1-d668-482c-8303-2ca93bdfc057"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.730909 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cfe0ed1-d668-482c-8303-2ca93bdfc057-config" (OuterVolumeSpecName: "config") pod "8cfe0ed1-d668-482c-8303-2ca93bdfc057" (UID: "8cfe0ed1-d668-482c-8303-2ca93bdfc057"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.731241 4812 generic.go:334] "Generic (PLEG): container finished" podID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerID="421d219729fffc3e229f6f163aa63a1f6a6b2a4dd6565bac6b4f81cad77dfa57" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.731327 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f4002e-a9bd-462d-b5ed-ce5ea166ec16","Type":"ContainerDied","Data":"421d219729fffc3e229f6f163aa63a1f6a6b2a4dd6565bac6b4f81cad77dfa57"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.731497 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4f3dca2d-7c6c-428d-9789-1463444fe46f" (UID: "4f3dca2d-7c6c-428d-9789-1463444fe46f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.732985 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f3dca2d-7c6c-428d-9789-1463444fe46f-scripts" (OuterVolumeSpecName: "scripts") pod "4f3dca2d-7c6c-428d-9789-1463444fe46f" (UID: "4f3dca2d-7c6c-428d-9789-1463444fe46f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733406 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg22s\" (UniqueName: \"kubernetes.io/projected/d7b9be63-7c78-4d97-87e7-8efd09c2669b-kube-api-access-lg22s\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733496 4812 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733564 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733626 4812 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733682 4812 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f3dca2d-7c6c-428d-9789-1463444fe46f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733738 4812 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/8cfe0ed1-d668-482c-8303-2ca93bdfc057-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733793 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cfe0ed1-d668-482c-8303-2ca93bdfc057-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.733877 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f3dca2d-7c6c-428d-9789-1463444fe46f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.739367 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jv5mj"] Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.743106 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerID="3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.743215 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce28537c-ff5d-4318-bc95-8a29da6aae53","Type":"ContainerDied","Data":"3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.745399 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-d48t8" event={"ID":"4f3dca2d-7c6c-428d-9789-1463444fe46f","Type":"ContainerDied","Data":"fa4205f170679e4765dd002880f1f84cf71f114abf126553d1d5a71badd9a9bc"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.745435 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-d48t8" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.745442 4812 scope.go:117] "RemoveContainer" containerID="d5ff05e4d3f2fbc8607a3a9a2e2d57c02a86f1e0f1e41e800c7ae06df2e9f0dc" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.747195 4812 generic.go:334] "Generic (PLEG): container finished" podID="8f7453e8-f9c5-4588-80b8-82bba37e1514" containerID="efc44587faa3ea03e56f32c183bd4613a63898038cbd2f133bb473692484cc9e" exitCode=137 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.750383 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-n2bml_8cfe0ed1-d668-482c-8303-2ca93bdfc057/openstack-network-exporter/0.log" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.750465 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-n2bml" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.751042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-n2bml" event={"ID":"8cfe0ed1-d668-482c-8303-2ca93bdfc057","Type":"ContainerDied","Data":"7b608d902f8126cf5abff924aed7bf979e6a0f29b24dbc92e7e25d48f3863032"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.754197 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba9f8224-4f20-4c27-b242-3385791aed68" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.754241 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerDied","Data":"20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.756653 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerID="9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.756750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f58df686c-jn8qq" event={"ID":"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa","Type":"ContainerDied","Data":"9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.757912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" event={"ID":"d7b9be63-7c78-4d97-87e7-8efd09c2669b","Type":"ContainerDied","Data":"678ed26199d83c2f2720c5a67c94fb04859f4e7a7afdd5e3d0f856d59fc34f92"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.757983 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7f54fb65-xnfjt" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.760565 4812 generic.go:334] "Generic (PLEG): container finished" podID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerID="c13e6786a49a879ebdafe7e2706bbeadf9a5620f8ec5d41a07a487a04ba9d615" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.760683 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" event={"ID":"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3","Type":"ContainerDied","Data":"c13e6786a49a879ebdafe7e2706bbeadf9a5620f8ec5d41a07a487a04ba9d615"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.764540 4812 generic.go:334] "Generic (PLEG): container finished" podID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerID="4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.764601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" event={"ID":"2e1ab98f-9015-432e-90f2-4692dc37c99e","Type":"ContainerDied","Data":"4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.766938 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dfeb89d5-a81d-411c-8808-ae9f506780e2/ovsdbserver-sb/0.log" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.766987 4812 generic.go:334] "Generic (PLEG): container finished" podID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerID="b80b07ff93663ea7089effb52d3abdb13f3c43cfcfd43f87f18a84303d9f1a1c" exitCode=2 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.767004 4812 generic.go:334] "Generic (PLEG): container finished" podID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerID="542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.767053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfeb89d5-a81d-411c-8808-ae9f506780e2","Type":"ContainerDied","Data":"b80b07ff93663ea7089effb52d3abdb13f3c43cfcfd43f87f18a84303d9f1a1c"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.767610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfeb89d5-a81d-411c-8808-ae9f506780e2","Type":"ContainerDied","Data":"542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.769483 4812 generic.go:334] "Generic (PLEG): container finished" podID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerID="5afde10d2f90a0c5d1bc9ea9375923c024c51c32da257a3842c5489dcc1fd794" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.769530 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed","Type":"ContainerDied","Data":"5afde10d2f90a0c5d1bc9ea9375923c024c51c32da257a3842c5489dcc1fd794"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.771865 4812 generic.go:334] "Generic (PLEG): container finished" podID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerID="bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5" exitCode=143 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.771918 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9","Type":"ContainerDied","Data":"bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.776983 4812 generic.go:334] "Generic (PLEG): container finished" podID="418d4320-50e7-4767-bed7-db43af1583f0" containerID="37219908d6616db696c761cac954619d4884750ea17c429640079a84f18bb5e8" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.777004 4812 generic.go:334] "Generic (PLEG): container finished" podID="418d4320-50e7-4767-bed7-db43af1583f0" containerID="c1a3bf87878ce67c52c317f9bd13b4bf250280872ea6f819e3464fba67ab5710" exitCode=0 Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.777023 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86f4846457-t2fl4" event={"ID":"418d4320-50e7-4767-bed7-db43af1583f0","Type":"ContainerDied","Data":"37219908d6616db696c761cac954619d4884750ea17c429640079a84f18bb5e8"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.777041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86f4846457-t2fl4" event={"ID":"418d4320-50e7-4767-bed7-db43af1583f0","Type":"ContainerDied","Data":"c1a3bf87878ce67c52c317f9bd13b4bf250280872ea6f819e3464fba67ab5710"} Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.778629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3dca2d-7c6c-428d-9789-1463444fe46f-kube-api-access-475jv" (OuterVolumeSpecName: "kube-api-access-475jv") pod "4f3dca2d-7c6c-428d-9789-1463444fe46f" (UID: "4f3dca2d-7c6c-428d-9789-1463444fe46f"). InnerVolumeSpecName "kube-api-access-475jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.779290 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfe0ed1-d668-482c-8303-2ca93bdfc057-kube-api-access-26sgz" (OuterVolumeSpecName: "kube-api-access-26sgz") pod "8cfe0ed1-d668-482c-8303-2ca93bdfc057" (UID: "8cfe0ed1-d668-482c-8303-2ca93bdfc057"). InnerVolumeSpecName "kube-api-access-26sgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.793516 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7b9be63-7c78-4d97-87e7-8efd09c2669b" (UID: "d7b9be63-7c78-4d97-87e7-8efd09c2669b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.798979 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061 is running failed: container process not found" containerID="542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.800594 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061 is running failed: container process not found" containerID="542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.801022 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061 is running failed: container process not found" containerID="542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 24 19:39:33 crc kubenswrapper[4812]: E1124 19:39:33.801045 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="ovsdbserver-sb" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.835870 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.835899 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26sgz\" (UniqueName: \"kubernetes.io/projected/8cfe0ed1-d668-482c-8303-2ca93bdfc057-kube-api-access-26sgz\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.835908 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-475jv\" (UniqueName: \"kubernetes.io/projected/4f3dca2d-7c6c-428d-9789-1463444fe46f-kube-api-access-475jv\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.875232 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7b9be63-7c78-4d97-87e7-8efd09c2669b" (UID: "d7b9be63-7c78-4d97-87e7-8efd09c2669b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.896531 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f3dca2d-7c6c-428d-9789-1463444fe46f" (UID: "4f3dca2d-7c6c-428d-9789-1463444fe46f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.931926 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-config" (OuterVolumeSpecName: "config") pod "d7b9be63-7c78-4d97-87e7-8efd09c2669b" (UID: "d7b9be63-7c78-4d97-87e7-8efd09c2669b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.939028 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.939057 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.939067 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.950689 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7b9be63-7c78-4d97-87e7-8efd09c2669b" (UID: "d7b9be63-7c78-4d97-87e7-8efd09c2669b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:33 crc kubenswrapper[4812]: I1124 19:39:33.970869 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7b9be63-7c78-4d97-87e7-8efd09c2669b" (UID: "d7b9be63-7c78-4d97-87e7-8efd09c2669b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.000999 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cfe0ed1-d668-482c-8303-2ca93bdfc057" (UID: "8cfe0ed1-d668-482c-8303-2ca93bdfc057"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.049959 4812 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.049990 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b9be63-7c78-4d97-87e7-8efd09c2669b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.050001 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.116526 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement0bb7-account-delete-m56jl"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.136937 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron448d-account-delete-qskvr"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.156897 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 19:39:34 crc kubenswrapper[4812]: E1124 19:39:34.160603 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 24 19:39:34 crc kubenswrapper[4812]: E1124 19:39:34.163185 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.167119 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder5926-account-delete-lvwv5"] Nov 24 19:39:34 crc kubenswrapper[4812]: E1124 19:39:34.181326 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 24 19:39:34 crc kubenswrapper[4812]: E1124 19:39:34.181444 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="ovn-northd" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.188655 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8cfe0ed1-d668-482c-8303-2ca93bdfc057" (UID: "8cfe0ed1-d668-482c-8303-2ca93bdfc057"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.189816 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-xnfjt"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.192420 4812 scope.go:117] "RemoveContainer" containerID="62d03c4a9e5685c9c943a0d1a4db0073f9bc0452d073dbf66734cda8bf0c38b1" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.195832 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7f54fb65-xnfjt"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.201666 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.219150 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "4f3dca2d-7c6c-428d-9789-1463444fe46f" (UID: "4f3dca2d-7c6c-428d-9789-1463444fe46f"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.228782 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.238647 4812 scope.go:117] "RemoveContainer" containerID="1f40489248bad75264493cd7ecc63fec04273b7705afa1d923049f768b10f769" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.240165 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dfeb89d5-a81d-411c-8808-ae9f506780e2/ovsdbserver-sb/0.log" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.240239 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.259628 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-combined-ca-bundle\") pod \"8f7453e8-f9c5-4588-80b8-82bba37e1514\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.259757 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72qwx\" (UniqueName: \"kubernetes.io/projected/8f7453e8-f9c5-4588-80b8-82bba37e1514-kube-api-access-72qwx\") pod \"8f7453e8-f9c5-4588-80b8-82bba37e1514\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.259863 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config-secret\") pod \"8f7453e8-f9c5-4588-80b8-82bba37e1514\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.259899 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config\") pod \"8f7453e8-f9c5-4588-80b8-82bba37e1514\" (UID: \"8f7453e8-f9c5-4588-80b8-82bba37e1514\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.260403 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f3dca2d-7c6c-428d-9789-1463444fe46f-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.260416 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8cfe0ed1-d668-482c-8303-2ca93bdfc057-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.272493 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f7453e8-f9c5-4588-80b8-82bba37e1514-kube-api-access-72qwx" (OuterVolumeSpecName: "kube-api-access-72qwx") pod "8f7453e8-f9c5-4588-80b8-82bba37e1514" (UID: "8f7453e8-f9c5-4588-80b8-82bba37e1514"). InnerVolumeSpecName "kube-api-access-72qwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.277551 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance7fd2-account-delete-vl62k"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.302584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8f7453e8-f9c5-4588-80b8-82bba37e1514" (UID: "8f7453e8-f9c5-4588-80b8-82bba37e1514"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.317219 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f7453e8-f9c5-4588-80b8-82bba37e1514" (UID: "8f7453e8-f9c5-4588-80b8-82bba37e1514"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.343822 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8f7453e8-f9c5-4588-80b8-82bba37e1514" (UID: "8f7453e8-f9c5-4588-80b8-82bba37e1514"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366345 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-internal-tls-certs\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366402 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-run-httpd\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366426 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366450 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-config-data\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366481 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-vencrypt-tls-certs\") pod \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366503 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-nova-novncproxy-tls-certs\") pod \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366518 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-log-httpd\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366549 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdb-rundir\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366564 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdbserver-sb-tls-certs\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366584 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-config\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366620 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bnxv\" (UniqueName: \"kubernetes.io/projected/dfeb89d5-a81d-411c-8808-ae9f506780e2-kube-api-access-9bnxv\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366649 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6pmc\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-kube-api-access-j6pmc\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366693 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-combined-ca-bundle\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366719 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-combined-ca-bundle\") pod \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366739 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-etc-swift\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366773 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-metrics-certs-tls-certs\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366808 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-config-data\") pod \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366833 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-public-tls-certs\") pod \"418d4320-50e7-4767-bed7-db43af1583f0\" (UID: \"418d4320-50e7-4767-bed7-db43af1583f0\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366853 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-trjwl\" (UniqueName: \"kubernetes.io/projected/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-kube-api-access-trjwl\") pod \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\" (UID: \"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366915 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-combined-ca-bundle\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.366943 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-scripts\") pod \"dfeb89d5-a81d-411c-8808-ae9f506780e2\" (UID: \"dfeb89d5-a81d-411c-8808-ae9f506780e2\") " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.367300 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72qwx\" (UniqueName: \"kubernetes.io/projected/8f7453e8-f9c5-4588-80b8-82bba37e1514-kube-api-access-72qwx\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.367311 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.367320 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f7453e8-f9c5-4588-80b8-82bba37e1514-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.367328 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f7453e8-f9c5-4588-80b8-82bba37e1514-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.367519 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-config" (OuterVolumeSpecName: "config") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.368318 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-scripts" (OuterVolumeSpecName: "scripts") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.369325 4812 scope.go:117] "RemoveContainer" containerID="b78e0685c8bdcfef9fa9fdc10eabd6b7f30b9fa81246e5fa6583ae72d1a875d6" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.370429 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.374006 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.374906 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.381825 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.389682 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfeb89d5-a81d-411c-8808-ae9f506780e2-kube-api-access-9bnxv" (OuterVolumeSpecName: "kube-api-access-9bnxv") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "kube-api-access-9bnxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.391927 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-kube-api-access-j6pmc" (OuterVolumeSpecName: "kube-api-access-j6pmc") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "kube-api-access-j6pmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.392029 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.408321 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-kube-api-access-trjwl" (OuterVolumeSpecName: "kube-api-access-trjwl") pod "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" (UID: "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26"). InnerVolumeSpecName "kube-api-access-trjwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.447520 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-config-data" (OuterVolumeSpecName: "config-data") pod "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" (UID: "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: W1124 19:39:34.457643 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2d42cca_2cc7_4b6a_9f1d_215f522b4c82.slice/crio-78c456259fa0894750b2097a40cb0c89a5e41533a162992101df1bd5e34c787e WatchSource:0}: Error finding container 78c456259fa0894750b2097a40cb0c89a5e41533a162992101df1bd5e34c787e: Status 404 returned error can't find the container with id 78c456259fa0894750b2097a40cb0c89a5e41533a162992101df1bd5e34c787e Nov 24 19:39:34 crc kubenswrapper[4812]: E1124 19:39:34.478384 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478408 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478439 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478448 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trjwl\" (UniqueName: \"kubernetes.io/projected/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-kube-api-access-trjwl\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: E1124 19:39:34.478475 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data podName:db2cd03e-b999-48ea-b540-7fd35356ba8b nodeName:}" failed. No retries permitted until 2025-11-24 19:39:38.478454268 +0000 UTC m=+1372.267406639 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data") pod "rabbitmq-cell1-server-0" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b") : configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478508 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478547 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478571 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478583 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/418d4320-50e7-4767-bed7-db43af1583f0-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478594 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478603 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfeb89d5-a81d-411c-8808-ae9f506780e2-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478614 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bnxv\" (UniqueName: \"kubernetes.io/projected/dfeb89d5-a81d-411c-8808-ae9f506780e2-kube-api-access-9bnxv\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.478623 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6pmc\" (UniqueName: \"kubernetes.io/projected/418d4320-50e7-4767-bed7-db43af1583f0-kube-api-access-j6pmc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.480546 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" (UID: "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.497517 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.517715 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
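
The nestedpendingoperations.go error above defers the failed mount of rabbitmq-cell1-server-0's config-data volume with durationBeforeRetry 4s; a first failure later in this log is deferred only 500ms. That pattern is consistent with per-operation exponential backoff that doubles after every failure. The sketch below illustrates the doubling; the initial delay and the cap are assumptions for illustration, not values taken from kubelet source.

    package main

    import (
        "fmt"
        "time"
    )

    // retryBackoff models the per-operation delay kubelet applies when a
    // volume operation keeps failing ("No retries permitted until ...
    // durationBeforeRetry"). Each failure doubles the delay; the initial
    // value and the cap below are assumptions for illustration.
    type retryBackoff struct {
        delay time.Duration
    }

    func (b *retryBackoff) next() time.Duration {
        const (
            initial  = 500 * time.Millisecond // assumed first-retry delay
            maxDelay = 2 * time.Minute        // assumed upper bound
        )
        switch {
        case b.delay == 0:
            b.delay = initial
        case b.delay < maxDelay:
            b.delay *= 2
            if b.delay > maxDelay {
                b.delay = maxDelay
            }
        }
        return b.delay
    }

    func main() {
        var b retryBackoff
        // Prints 500ms, 1s, 2s, 4s, 8s: the 500ms and 4s values match the
        // durationBeforeRetry entries seen in this log.
        for i := 0; i < 5; i++ {
            fmt.Println(b.next())
        }
    }
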
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.521588 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" (UID: "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.524896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-config-data" (OuterVolumeSpecName: "config-data") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.546756 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi5538-account-delete-t9sww"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.580314 4812 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.580642 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.580653 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.580662 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.580671 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.582894 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell063c1-account-delete-9lhdk"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.615134 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.615375 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.635702 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.672929 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-n2bml"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.680030 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-n2bml"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.682444 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.682474 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.682483 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.684215 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-d48t8"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.689984 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-d48t8"] Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.731273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "dfeb89d5-a81d-411c-8808-ae9f506780e2" (UID: "dfeb89d5-a81d-411c-8808-ae9f506780e2"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.785043 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfeb89d5-a81d-411c-8808-ae9f506780e2-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.827748 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" (UID: "ee91c9e3-c2f7-48a3-8a78-19f6dc262e26"). InnerVolumeSpecName "nova-novncproxy-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.889437 4812 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.929300 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell063c1-account-delete-9lhdk" event={"ID":"7c8539e2-3b3b-488b-9111-d59aa7317490","Type":"ContainerStarted","Data":"188993a747e2ccdf97c80db73f5a7857980951204b931360a0a221f4d58536a8"} Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.931137 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement0bb7-account-delete-m56jl" event={"ID":"b12268ff-1299-46c4-8937-8c4dc00f7dc5","Type":"ContainerStarted","Data":"b31271e817b6cbe93c188af2bf94c180b3e0e23ccb62090380412786781d78c1"} Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.935636 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder5926-account-delete-lvwv5" event={"ID":"d1b0b239-3d7b-40c3-a599-f3f74d452813","Type":"ContainerStarted","Data":"887cc2b17ff121668f2000018bdc31443036cc336d570f9b6969a8addfc3e61d"} Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.938540 4812 generic.go:334] "Generic (PLEG): container finished" podID="72d5ebd7-a16f-49bb-a80e-1138d5f197d3" containerID="b424d3652698c522c799ba60499e354d09adfff1a34968540b33cb676b432c7e" exitCode=0 Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.938598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanfac2-account-delete-v4hfz" event={"ID":"72d5ebd7-a16f-49bb-a80e-1138d5f197d3","Type":"ContainerDied","Data":"b424d3652698c522c799ba60499e354d09adfff1a34968540b33cb676b432c7e"} Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.942900 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi5538-account-delete-t9sww" event={"ID":"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9","Type":"ContainerStarted","Data":"7ba603237c638b8b95b23a9796088c70a29ebfff6750c60097d85338ef551e9a"} Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.948414 4812 generic.go:334] "Generic (PLEG): container finished" podID="93240875-2cab-44e0-b475-20f46cb4850e" containerID="a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af" exitCode=0 Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.948475 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"93240875-2cab-44e0-b475-20f46cb4850e","Type":"ContainerDied","Data":"a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af"} Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.965112 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "418d4320-50e7-4767-bed7-db43af1583f0" (UID: "418d4320-50e7-4767-bed7-db43af1583f0"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:34 crc kubenswrapper[4812]: I1124 19:39:34.993016 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/418d4320-50e7-4767-bed7-db43af1583f0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.025665 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c07c15-c183-4546-8c7b-574776382e9b" path="/var/lib/kubelet/pods/29c07c15-c183-4546-8c7b-574776382e9b/volumes" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.026313 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" path="/var/lib/kubelet/pods/4f3dca2d-7c6c-428d-9789-1463444fe46f/volumes" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.027001 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfe0ed1-d668-482c-8303-2ca93bdfc057" path="/var/lib/kubelet/pods/8cfe0ed1-d668-482c-8303-2ca93bdfc057/volumes" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.035786 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844" path="/var/lib/kubelet/pods/8e2bcc51-ac60-4c5f-b0ac-6b32ebe0a844/volumes" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.036481 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f7453e8-f9c5-4588-80b8-82bba37e1514" path="/var/lib/kubelet/pods/8f7453e8-f9c5-4588-80b8-82bba37e1514/volumes" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.036970 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a238488e-998f-478f-8163-452bb47b4dfd" path="/var/lib/kubelet/pods/a238488e-998f-478f-8163-452bb47b4dfd/volumes" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.040387 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" path="/var/lib/kubelet/pods/d7b9be63-7c78-4d97-87e7-8efd09c2669b/volumes" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.041659 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance7fd2-account-delete-vl62k" event={"ID":"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82","Type":"ContainerStarted","Data":"78c456259fa0894750b2097a40cb0c89a5e41533a162992101df1bd5e34c787e"} Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.044616 4812 generic.go:334] "Generic (PLEG): container finished" podID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerID="a5b96a8d42c3405967068d87a03841996bd50f3213a845dde3d04bbf1ba0f3ff" exitCode=0 Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.044713 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6474fd5f77-zwqph" event={"ID":"5388a6e8-73d7-4097-9521-b47649b4c6c8","Type":"ContainerDied","Data":"a5b96a8d42c3405967068d87a03841996bd50f3213a845dde3d04bbf1ba0f3ff"} Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.057249 4812 scope.go:117] "RemoveContainer" containerID="efc44587faa3ea03e56f32c183bd4613a63898038cbd2f133bb473692484cc9e" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.057381 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.060499 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dfeb89d5-a81d-411c-8808-ae9f506780e2/ovsdbserver-sb/0.log" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.060571 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dfeb89d5-a81d-411c-8808-ae9f506780e2","Type":"ContainerDied","Data":"cbdf6f20669301fbd2866d17b92f7a44649a69a94ca6c94ae200cebc5be35a8c"} Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.060615 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.086669 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerID="c006dae87ca76eaf2e493bd8a0fbad15d198c2c947c2d61201bc1d4f3e9c9d97" exitCode=0 Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.086752 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c","Type":"ContainerDied","Data":"c006dae87ca76eaf2e493bd8a0fbad15d198c2c947c2d61201bc1d4f3e9c9d97"} Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.088638 4812 generic.go:334] "Generic (PLEG): container finished" podID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerID="44d0bec5b4e03ee1c205aff04d3d557f0b6f05d3d6adee7a3fd558ceba3cbd6b" exitCode=0 Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.088704 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" event={"ID":"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3","Type":"ContainerDied","Data":"44d0bec5b4e03ee1c205aff04d3d557f0b6f05d3d6adee7a3fd558ceba3cbd6b"} Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.089697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ee91c9e3-c2f7-48a3-8a78-19f6dc262e26","Type":"ContainerDied","Data":"c7714efb1d8ef338030911e95ba5501768f5bdd583b8c2b72ac5c89c3f066ed0"} Nov 24 19:39:35 crc kubenswrapper[4812]: I1124 19:39:35.089760 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.101113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-86f4846457-t2fl4" event={"ID":"418d4320-50e7-4767-bed7-db43af1583f0","Type":"ContainerDied","Data":"8896e5afc4cb3d3e2a72bc8141f236974d95f7861b6c4367e44b1f7fb86fe9c4"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.101487 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-86f4846457-t2fl4" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.107709 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron448d-account-delete-qskvr" event={"ID":"b72df6d7-27f7-49e5-93d9-4069db72b602","Type":"ContainerStarted","Data":"fc1c492c9e28818aec5eb4f22cf758519cfc946a5a19208132fe5282f31a336a"} Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:35.124940 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af is running failed: container process not found" containerID="a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:35.125153 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af is running failed: container process not found" containerID="a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:35.125336 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af is running failed: container process not found" containerID="a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:35.125489 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="93240875-2cab-44e0-b475-20f46cb4850e" containerName="nova-cell0-conductor-conductor" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:35.308558 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:35.308621 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data podName:7cb8ede6-6163-4906-89f7-7fe6458edc36 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:39.30860683 +0000 UTC m=+1373.097559201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data") pod "rabbitmq-server-0" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36") : configmap "rabbitmq-config-data" not found Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.500860 4812 scope.go:117] "RemoveContainer" containerID="b80b07ff93663ea7089effb52d3abdb13f3c43cfcfd43f87f18a84303d9f1a1c" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.550300 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.690932 4812 scope.go:117] "RemoveContainer" containerID="542fa987ba71008e6087798dc608e5f12d1c0e5b34bd8042efb1db753a6ec061" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.693732 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.714859 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.714942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kolla-config\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715026 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-combined-ca-bundle\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715078 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-galera-tls-certs\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715101 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g686\" (UniqueName: \"kubernetes.io/projected/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kube-api-access-4g686\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715119 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-default\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715156 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-operator-scripts\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715173 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-generated\") pod \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\" (UID: \"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715966 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod 
"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.715989 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.716465 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.716566 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.754280 4812 scope.go:117] "RemoveContainer" containerID="632ed76758afffe7deabbb497be96385b1fa68fdc85d0a39f726b5ca3baf0880" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.762002 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kube-api-access-4g686" (OuterVolumeSpecName: "kube-api-access-4g686") pod "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "kube-api-access-4g686". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.780767 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "mysql-db") pod "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.781041 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.817771 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jphk8\" (UniqueName: \"kubernetes.io/projected/5388a6e8-73d7-4097-9521-b47649b4c6c8-kube-api-access-jphk8\") pod \"5388a6e8-73d7-4097-9521-b47649b4c6c8\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.817893 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data-custom\") pod \"5388a6e8-73d7-4097-9521-b47649b4c6c8\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.817954 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-combined-ca-bundle\") pod \"5388a6e8-73d7-4097-9521-b47649b4c6c8\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.817999 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5388a6e8-73d7-4097-9521-b47649b4c6c8-logs\") pod \"5388a6e8-73d7-4097-9521-b47649b4c6c8\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818058 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data\") pod \"5388a6e8-73d7-4097-9521-b47649b4c6c8\" (UID: \"5388a6e8-73d7-4097-9521-b47649b4c6c8\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818443 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818457 4812 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818468 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818477 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g686\" (UniqueName: \"kubernetes.io/projected/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-kube-api-access-4g686\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818486 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818494 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818502 4812 reconciler_common.go:293] "Volume detached for 
volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.818917 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5388a6e8-73d7-4097-9521-b47649b4c6c8-logs" (OuterVolumeSpecName: "logs") pod "5388a6e8-73d7-4097-9521-b47649b4c6c8" (UID: "5388a6e8-73d7-4097-9521-b47649b4c6c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.821015 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.167:8776/healthcheck\": read tcp 10.217.0.2:53462->10.217.0.167:8776: read: connection reset by peer" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.831631 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5388a6e8-73d7-4097-9521-b47649b4c6c8" (UID: "5388a6e8-73d7-4097-9521-b47649b4c6c8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.832645 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5388a6e8-73d7-4097-9521-b47649b4c6c8-kube-api-access-jphk8" (OuterVolumeSpecName: "kube-api-access-jphk8") pod "5388a6e8-73d7-4097-9521-b47649b4c6c8" (UID: "5388a6e8-73d7-4097-9521-b47649b4c6c8"). InnerVolumeSpecName "kube-api-access-jphk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.878626 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.878950 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-central-agent" containerID="cri-o://8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.878991 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="proxy-httpd" containerID="cri-o://a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.879099 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="sg-core" containerID="cri-o://b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.879152 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-notification-agent" containerID="cri-o://fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.922119 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jphk8\" (UniqueName: \"kubernetes.io/projected/5388a6e8-73d7-4097-9521-b47649b4c6c8-kube-api-access-jphk8\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.922152 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.922164 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5388a6e8-73d7-4097-9521-b47649b4c6c8-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:35.934173 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.053722 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.053944 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="174dd2a1-292b-4e07-8ed8-48d4109e9f57" containerName="memcached" containerID="cri-o://ea38aff07288e1f408c0f6650bafdbaba5ec40ffed460a32bda946e57bb0490c" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.065663 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.104523 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"5388a6e8-73d7-4097-9521-b47649b4c6c8" (UID: "5388a6e8-73d7-4097-9521-b47649b4c6c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.105704 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" (UID: "1ea33728-2445-4a6f-9ac6-ab0aeff9f95c"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.130245 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-84l42"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.145383 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-62xwh"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.145432 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-62xwh"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.149319 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.149453 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.149468 4812 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.170508 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data" (OuterVolumeSpecName: "config-data") pod "5388a6e8-73d7-4097-9521-b47649b4c6c8" (UID: "5388a6e8-73d7-4097-9521-b47649b4c6c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.170569 4812 generic.go:334] "Generic (PLEG): container finished" podID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerID="a37153fef35e2218bd72e8d349e3157d2b4261a45df7d03e4deb1d5ee538a030" exitCode=0 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.170661 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f4002e-a9bd-462d-b5ed-ce5ea166ec16","Type":"ContainerDied","Data":"a37153fef35e2218bd72e8d349e3157d2b4261a45df7d03e4deb1d5ee538a030"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.185548 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-84l42"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.207597 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi5538-account-delete-t9sww" event={"ID":"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9","Type":"ContainerStarted","Data":"d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.216795 4812 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapi5538-account-delete-t9sww" secret="" err="secret \"galera-openstack-dockercfg-zbtr8\" not found" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.228464 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.201:3000/\": dial tcp 10.217.0.201:3000: connect: connection refused" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.234744 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell063c1-account-delete-9lhdk" event={"ID":"7c8539e2-3b3b-488b-9111-d59aa7317490","Type":"ContainerStarted","Data":"9e8766dff59643e0ac7ad1773abbd1f12836d5051ab75fbe8a53e51dc0236e21"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.235301 4812 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell063c1-account-delete-9lhdk" secret="" err="secret \"galera-openstack-dockercfg-zbtr8\" not found" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.239614 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c8f69d86c-2v9p4"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.239885 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-c8f69d86c-2v9p4" podUID="c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" containerName="keystone-api" containerID="cri-o://d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.268114 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.268550 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts podName:7c8539e2-3b3b-488b-9111-d59aa7317490 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:36.768520516 +0000 UTC m=+1370.557472887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts") pod "novacell063c1-account-delete-9lhdk" (UID: "7c8539e2-3b3b-488b-9111-d59aa7317490") : configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.268965 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystonec995-account-delete-t2md4"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.269380 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.269397 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.269411 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-server" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.269417 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-server" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.269432 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.269438 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.269448 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerName="ovn-controller" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.269455 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerName="ovn-controller" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.273745 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerName="galera" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.273774 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerName="galera" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.273798 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" containerName="dnsmasq-dns" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.273804 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" containerName="dnsmasq-dns" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.273818 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.273824 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.273855 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="ovsdbserver-sb" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.273864 4812 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="ovsdbserver-sb" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.273883 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="ovsdbserver-nb" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.273889 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="ovsdbserver-nb" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.270172 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.274056 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts podName:2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:36.77402877 +0000 UTC m=+1370.562981141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts") pod "novaapi5538-account-delete-t9sww" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9") : configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.274989 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-httpd" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275003 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-httpd" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.275023 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfe0ed1-d668-482c-8303-2ca93bdfc057" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275031 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfe0ed1-d668-482c-8303-2ca93bdfc057" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.275066 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker-log" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275073 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker-log" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.275082 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerName="mysql-bootstrap" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275088 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerName="mysql-bootstrap" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.275100 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275106 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.275146 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" 
containerName="init" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275154 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" containerName="init" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275708 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f3dca2d-7c6c-428d-9789-1463444fe46f" containerName="ovn-controller" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275727 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="ovsdbserver-sb" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275736 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="ovsdbserver-nb" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275744 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" containerName="galera" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275790 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker-log" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275799 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfe0ed1-d668-482c-8303-2ca93bdfc057" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275812 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-server" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.275820 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c07c15-c183-4546-8c7b-574776382e9b" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.270393 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5388a6e8-73d7-4097-9521-b47649b4c6c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.276794 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.281500 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" containerName="openstack-network-exporter" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.288539 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="418d4320-50e7-4767-bed7-db43af1583f0" containerName="proxy-httpd" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.288565 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.288575 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b9be63-7c78-4d97-87e7-8efd09c2669b" containerName="dnsmasq-dns" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.288594 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" containerName="barbican-worker" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.289444 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1ea33728-2445-4a6f-9ac6-ab0aeff9f95c","Type":"ContainerDied","Data":"81cbdbc0e4b9a56a7fb4ff4568fcc82b7298ac4cf48d0b2c832c246076473a95"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.289480 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystonec995-account-delete-t2md4"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.289495 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.289788 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.293853 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-srrps"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.309042 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-f58df686c-jn8qq" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.161:9696/\": dial tcp 10.217.0.161:9696: connect: connection refused" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.311741 4812 generic.go:334] "Generic (PLEG): container finished" podID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerID="4984c0a89981ef7e9ccb2912cd1e3db25ba3f333055dcbc10b30cb69f8c45835" exitCode=0 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.311814 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed","Type":"ContainerDied","Data":"4984c0a89981ef7e9ccb2912cd1e3db25ba3f333055dcbc10b30cb69f8c45835"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.312882 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapi5538-account-delete-t9sww" podStartSLOduration=5.3128633149999995 podStartE2EDuration="5.312863315s" podCreationTimestamp="2025-11-24 19:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:39:36.23465956 +0000 UTC m=+1370.023611931" watchObservedRunningTime="2025-11-24 19:39:36.312863315 +0000 UTC m=+1370.101815686" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.319158 4812 generic.go:334] "Generic (PLEG): container finished" podID="b12268ff-1299-46c4-8937-8c4dc00f7dc5" containerID="0809a53c850e8919822e704dda8799d073f705e0a2901f82f0b34719ebf12895" exitCode=0 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.319495 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement0bb7-account-delete-m56jl" event={"ID":"b12268ff-1299-46c4-8937-8c4dc00f7dc5","Type":"ContainerDied","Data":"0809a53c850e8919822e704dda8799d073f705e0a2901f82f0b34719ebf12895"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.327059 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-srrps"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.343543 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystonec995-account-delete-t2md4"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.347672 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c995-account-create-vdjsf"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.351989 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell063c1-account-delete-9lhdk" podStartSLOduration=5.351971628 podStartE2EDuration="5.351971628s" podCreationTimestamp="2025-11-24 19:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:39:36.258745063 +0000 UTC m=+1370.047697434" watchObservedRunningTime="2025-11-24 19:39:36.351971628 +0000 UTC m=+1370.140923999" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.352593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" 
event={"ID":"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3","Type":"ContainerDied","Data":"942bc5cdf6f1cd2fd24a5d3e49e9e3e57fc49522e24b1ffe010062052809a919"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.352644 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942bc5cdf6f1cd2fd24a5d3e49e9e3e57fc49522e24b1ffe010062052809a919" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.361286 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c995-account-create-vdjsf"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.366690 4812 generic.go:334] "Generic (PLEG): container finished" podID="d1b0b239-3d7b-40c3-a599-f3f74d452813" containerID="160b25a1390b6fcfeffce5f61c64ce498bcf9f9bb67ab059982c912baa5f9678" exitCode=0 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.366755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder5926-account-delete-lvwv5" event={"ID":"d1b0b239-3d7b-40c3-a599-f3f74d452813","Type":"ContainerDied","Data":"160b25a1390b6fcfeffce5f61c64ce498bcf9f9bb67ab059982c912baa5f9678"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.377370 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2w8kz"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.377839 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfr9\" (UniqueName: \"kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.377947 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.392539 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2w8kz"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.405673 4812 generic.go:334] "Generic (PLEG): container finished" podID="09c615e5-0d09-4775-b559-6883b6dc280b" containerID="b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63" exitCode=2 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.405777 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerDied","Data":"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.415648 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"93240875-2cab-44e0-b475-20f46cb4850e","Type":"ContainerDied","Data":"a91aa1ed2e3a138c6cd18478434b1ec1518f2e76b65517d1a8b48e280f559523"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.415687 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91aa1ed2e3a138c6cd18478434b1ec1518f2e76b65517d1a8b48e280f559523" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.418450 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron448d-account-delete-qskvr"] Nov 24 19:39:36 crc 
kubenswrapper[4812]: I1124 19:39:36.426098 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerID="0246dd3196f30fe8b017001268ba69633ac8f5832eaad85f9a757939e217b9fb" exitCode=0 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.426174 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-448d-account-create-z7k4c"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.426201 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf01449d-a52d-488f-bd93-b5f84b57fb13","Type":"ContainerDied","Data":"0246dd3196f30fe8b017001268ba69633ac8f5832eaad85f9a757939e217b9fb"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.433571 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-448d-account-create-z7k4c"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.472590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6474fd5f77-zwqph" event={"ID":"5388a6e8-73d7-4097-9521-b47649b4c6c8","Type":"ContainerDied","Data":"20027080f2b1aa6654230e68a7287fd0f79e9920ece752829987525cae68b110"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.472619 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6474fd5f77-zwqph" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.479148 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfr9\" (UniqueName: \"kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.479266 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.479425 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.479463 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts podName:2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c nodeName:}" failed. No retries permitted until 2025-11-24 19:39:36.979451159 +0000 UTC m=+1370.768403530 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts") pod "keystonec995-account-delete-t2md4" (UID: "2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c") : configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.479831 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerID="d07bf2cbab76f1bef1a940806ac6f6323ae423f3a69064aeb96db6f329d5665f" exitCode=0 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.479871 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574f454648-szvd4" event={"ID":"1c8509a7-6885-4238-9da3-b214b5f8868e","Type":"ContainerDied","Data":"d07bf2cbab76f1bef1a940806ac6f6323ae423f3a69064aeb96db6f329d5665f"} Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.481145 4812 generic.go:334] "Generic (PLEG): container finished" podID="b72df6d7-27f7-49e5-93d9-4069db72b602" containerID="de2924e496c4d6a4ca668dec3242cded056b7265571b0684e6ca59824356a9c8" exitCode=0 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.481305 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron448d-account-delete-qskvr" event={"ID":"b72df6d7-27f7-49e5-93d9-4069db72b602","Type":"ContainerDied","Data":"de2924e496c4d6a4ca668dec3242cded056b7265571b0684e6ca59824356a9c8"} Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.483761 4812 projected.go:194] Error preparing data for projected volume kube-api-access-tnfr9 for pod openstack/keystonec995-account-delete-t2md4: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.483850 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9 podName:2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c nodeName:}" failed. No retries permitted until 2025-11-24 19:39:36.983837642 +0000 UTC m=+1370.772790013 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tnfr9" (UniqueName: "kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9") pod "keystonec995-account-delete-t2md4" (UID: "2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.488674 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b9dbc131-33f8-4df7-88ba-2d90e93a436c" containerName="kube-state-metrics" containerID="cri-o://8ae9509f408e8d7a118f2c286e6243f94fd2c8cc3503c8aa239b550061eb119e" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.559877 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.559904 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": dial tcp 10.217.0.162:9311: connect: connection refused" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.606583 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerName="galera" containerID="cri-o://38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be" gracePeriod=30 Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.629211 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-wv687"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.639097 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-wv687"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.646935 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.656655 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.656682 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.661316 4812 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.661367 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.667077 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi5538-account-delete-t9sww"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.668664 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.669942 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.669965 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.676402 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5538-account-create-b9jdq"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.684798 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5538-account-create-b9jdq"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.711555 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-86f4846457-t2fl4"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.716931 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.732433 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-86f4846457-t2fl4"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.744324 4812 scope.go:117] "RemoveContainer" containerID="37219908d6616db696c761cac954619d4884750ea17c429640079a84f18bb5e8" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.746385 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.747211 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.770941 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.773419 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.785071 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-logs\") pod \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.785183 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-config-data\") pod \"93240875-2cab-44e0-b475-20f46cb4850e\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.785229 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data\") pod \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.785562 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-combined-ca-bundle\") pod \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.785654 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data-custom\") pod \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.785839 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w2jx\" (UniqueName: \"kubernetes.io/projected/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-kube-api-access-2w2jx\") pod \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\" (UID: \"fde2bf26-3775-4ffe-83d7-1bc11d36c1e3\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.786040 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-combined-ca-bundle\") pod \"93240875-2cab-44e0-b475-20f46cb4850e\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.786340 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvwmd\" (UniqueName: \"kubernetes.io/projected/93240875-2cab-44e0-b475-20f46cb4850e-kube-api-access-xvwmd\") pod \"93240875-2cab-44e0-b475-20f46cb4850e\" (UID: \"93240875-2cab-44e0-b475-20f46cb4850e\") " Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.787069 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.787120 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts podName:2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:37.787106644 +0000 UTC m=+1371.576059015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts") pod "novaapi5538-account-delete-t9sww" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9") : configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.788143 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-logs" (OuterVolumeSpecName: "logs") pod "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" (UID: "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.806798 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.807420 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-24r56"] Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.807953 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.808010 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts podName:7c8539e2-3b3b-488b-9111-d59aa7317490 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:37.807995058 +0000 UTC m=+1371.596947419 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts") pod "novacell063c1-account-delete-9lhdk" (UID: "7c8539e2-3b3b-488b-9111-d59aa7317490") : configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.825343 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-24r56"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.834700 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.834818 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.838981 4812 scope.go:117] "RemoveContainer" containerID="c1a3bf87878ce67c52c317f9bd13b4bf250280872ea6f819e3464fba67ab5710" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.840207 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-kube-api-access-2w2jx" (OuterVolumeSpecName: "kube-api-access-2w2jx") pod "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" (UID: "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3"). InnerVolumeSpecName "kube-api-access-2w2jx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.853765 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" (UID: "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.853851 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-63c1-account-create-zcbsq"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.868479 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-63c1-account-create-zcbsq"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.868568 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell063c1-account-delete-9lhdk"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.884138 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93240875-2cab-44e0-b475-20f46cb4850e-kube-api-access-xvwmd" (OuterVolumeSpecName: "kube-api-access-xvwmd") pod "93240875-2cab-44e0-b475-20f46cb4850e" (UID: "93240875-2cab-44e0-b475-20f46cb4850e"). InnerVolumeSpecName "kube-api-access-xvwmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.893231 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.895454 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w2jx\" (UniqueName: \"kubernetes.io/projected/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-kube-api-access-2w2jx\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.895473 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvwmd\" (UniqueName: \"kubernetes.io/projected/93240875-2cab-44e0-b475-20f46cb4850e-kube-api-access-xvwmd\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.895486 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.902049 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6474fd5f77-zwqph"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.907511 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6474fd5f77-zwqph"] Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.951683 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-config-data" (OuterVolumeSpecName: "config-data") pod "93240875-2cab-44e0-b475-20f46cb4850e" (UID: "93240875-2cab-44e0-b475-20f46cb4850e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.979223 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" (UID: "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.982601 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data" (OuterVolumeSpecName: "config-data") pod "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" (UID: "fde2bf26-3775-4ffe-83d7-1bc11d36c1e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.997554 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfr9\" (UniqueName: \"kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.997712 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.997862 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.997879 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: I1124 19:39:36.997908 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.997984 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:36 crc kubenswrapper[4812]: E1124 19:39:36.998034 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts podName:2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c nodeName:}" failed. No retries permitted until 2025-11-24 19:39:37.998019556 +0000 UTC m=+1371.786971927 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts") pod "keystonec995-account-delete-t2md4" (UID: "2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c") : configmap "openstack-scripts" not found Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.004572 4812 projected.go:194] Error preparing data for projected volume kube-api-access-tnfr9 for pod openstack/keystonec995-account-delete-t2md4: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.004636 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9 podName:2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c nodeName:}" failed. No retries permitted until 2025-11-24 19:39:38.004615471 +0000 UTC m=+1371.793567842 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tnfr9" (UniqueName: "kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9") pod "keystonec995-account-delete-t2md4" (UID: "2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.014940 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93240875-2cab-44e0-b475-20f46cb4850e" (UID: "93240875-2cab-44e0-b475-20f46cb4850e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.053035 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea33728-2445-4a6f-9ac6-ab0aeff9f95c" path="/var/lib/kubelet/pods/1ea33728-2445-4a6f-9ac6-ab0aeff9f95c/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.054109 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ff63a07-b659-4f65-a752-4b244ec5b470" path="/var/lib/kubelet/pods/1ff63a07-b659-4f65-a752-4b244ec5b470/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.054938 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="418d4320-50e7-4767-bed7-db43af1583f0" path="/var/lib/kubelet/pods/418d4320-50e7-4767-bed7-db43af1583f0/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.056401 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49981ca5-ad04-4746-8900-a440bb82bc36" path="/var/lib/kubelet/pods/49981ca5-ad04-4746-8900-a440bb82bc36/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.057285 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5388a6e8-73d7-4097-9521-b47649b4c6c8" path="/var/lib/kubelet/pods/5388a6e8-73d7-4097-9521-b47649b4c6c8/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.058063 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53b4924c-df56-43c7-a032-83eb18d1eca0" path="/var/lib/kubelet/pods/53b4924c-df56-43c7-a032-83eb18d1eca0/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.059392 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5530b75c-ecf7-4774-8d4e-1acd7c14d729" path="/var/lib/kubelet/pods/5530b75c-ecf7-4774-8d4e-1acd7c14d729/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.060034 4812 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="689ace22-4dce-411e-8558-8e84de48d105" path="/var/lib/kubelet/pods/689ace22-4dce-411e-8558-8e84de48d105/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.060724 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a6064d5-ead6-4ac9-8c53-6c72585d76ff" path="/var/lib/kubelet/pods/6a6064d5-ead6-4ac9-8c53-6c72585d76ff/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.061428 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8ae591-fb65-4513-b7eb-fd8f13c22761" path="/var/lib/kubelet/pods/cf8ae591-fb65-4513-b7eb-fd8f13c22761/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.062692 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da6aa924-f282-445c-8aed-71f9e3282b55" path="/var/lib/kubelet/pods/da6aa924-f282-445c-8aed-71f9e3282b55/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.063540 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfeb89d5-a81d-411c-8808-ae9f506780e2" path="/var/lib/kubelet/pods/dfeb89d5-a81d-411c-8808-ae9f506780e2/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.064252 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5a20600-c4a1-4a64-9c8b-c8cf8680abac" path="/var/lib/kubelet/pods/e5a20600-c4a1-4a64-9c8b-c8cf8680abac/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.065480 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee91c9e3-c2f7-48a3-8a78-19f6dc262e26" path="/var/lib/kubelet/pods/ee91c9e3-c2f7-48a3-8a78-19f6dc262e26/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.066111 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff101b79-8551-4d29-b686-d476abad5900" path="/var/lib/kubelet/pods/ff101b79-8551-4d29-b686-d476abad5900/volumes" Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.088384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-tnfr9 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystonec995-account-delete-t2md4" podUID="2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.089381 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-574f454648-szvd4" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.091422 4812 scope.go:117] "RemoveContainer" containerID="c006dae87ca76eaf2e493bd8a0fbad15d198c2c947c2d61201bc1d4f3e9c9d97" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.095760 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.099135 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93240875-2cab-44e0-b475-20f46cb4850e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.105366 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.142918 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.192292 4812 scope.go:117] "RemoveContainer" containerID="fa420cfb2941e121e115e251da95c5f8bb88cdb054bc433739412a5b4c2401d3" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.199952 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-internal-tls-certs\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.200011 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-combined-ca-bundle\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.200039 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-public-tls-certs\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.200586 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-internal-tls-certs\") pod \"1c8509a7-6885-4238-9da3-b214b5f8868e\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.200623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8dvx\" (UniqueName: \"kubernetes.io/projected/1c8509a7-6885-4238-9da3-b214b5f8868e-kube-api-access-x8dvx\") pod \"1c8509a7-6885-4238-9da3-b214b5f8868e\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.200900 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnjdq\" (UniqueName: \"kubernetes.io/projected/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-kube-api-access-mnjdq\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.212763 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8509a7-6885-4238-9da3-b214b5f8868e-kube-api-access-x8dvx" (OuterVolumeSpecName: "kube-api-access-x8dvx") pod "1c8509a7-6885-4238-9da3-b214b5f8868e" (UID: "1c8509a7-6885-4238-9da3-b214b5f8868e"). InnerVolumeSpecName "kube-api-access-x8dvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.225628 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-kube-api-access-mnjdq" (OuterVolumeSpecName: "kube-api-access-mnjdq") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). InnerVolumeSpecName "kube-api-access-mnjdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230166 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-scripts\") pod \"1c8509a7-6885-4238-9da3-b214b5f8868e\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230328 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-public-tls-certs\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230525 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-config-data\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230554 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-scripts\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230698 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-logs\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230721 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-httpd-run\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230849 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230869 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-combined-ca-bundle\") pod \"1c8509a7-6885-4238-9da3-b214b5f8868e\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.230892 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-config-data\") pod \"1c8509a7-6885-4238-9da3-b214b5f8868e\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-logs\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231054 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-logs\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231219 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-scripts\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231421 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231442 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-public-tls-certs\") pod \"1c8509a7-6885-4238-9da3-b214b5f8868e\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231865 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-logs" (OuterVolumeSpecName: "logs") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231968 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data-custom\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.231994 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-etc-machine-id\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232008 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232138 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-internal-tls-certs\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232470 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89ksh\" (UniqueName: \"kubernetes.io/projected/cf01449d-a52d-488f-bd93-b5f84b57fb13-kube-api-access-89ksh\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232493 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-combined-ca-bundle\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232532 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-logs" (OuterVolumeSpecName: "logs") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232592 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-scripts" (OuterVolumeSpecName: "scripts") pod "1c8509a7-6885-4238-9da3-b214b5f8868e" (UID: "1c8509a7-6885-4238-9da3-b214b5f8868e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232794 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-scripts\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232822 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-config-data\") pod \"cf01449d-a52d-488f-bd93-b5f84b57fb13\" (UID: \"cf01449d-a52d-488f-bd93-b5f84b57fb13\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.232963 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-httpd-run\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.233101 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c8509a7-6885-4238-9da3-b214b5f8868e-logs\") pod \"1c8509a7-6885-4238-9da3-b214b5f8868e\" (UID: \"1c8509a7-6885-4238-9da3-b214b5f8868e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.233126 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj264\" (UniqueName: \"kubernetes.io/projected/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-kube-api-access-mj264\") pod \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\" (UID: \"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.233147 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-combined-ca-bundle\") pod \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\" (UID: \"56f4002e-a9bd-462d-b5ed-ce5ea166ec16\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.234406 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.234440 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.236922 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8509a7-6885-4238-9da3-b214b5f8868e-logs" (OuterVolumeSpecName: "logs") pod "1c8509a7-6885-4238-9da3-b214b5f8868e" (UID: "1c8509a7-6885-4238-9da3-b214b5f8868e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243840 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c8509a7-6885-4238-9da3-b214b5f8868e-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243860 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8dvx\" (UniqueName: \"kubernetes.io/projected/1c8509a7-6885-4238-9da3-b214b5f8868e-kube-api-access-x8dvx\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243870 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnjdq\" (UniqueName: \"kubernetes.io/projected/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-kube-api-access-mnjdq\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243879 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243887 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243898 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf01449d-a52d-488f-bd93-b5f84b57fb13-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243907 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243915 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.243964 4812 scope.go:117] "RemoveContainer" containerID="a5b96a8d42c3405967068d87a03841996bd50f3213a845dde3d04bbf1ba0f3ff" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.245200 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.248279 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-scripts" (OuterVolumeSpecName: "scripts") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.249419 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-scripts" (OuterVolumeSpecName: "scripts") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.249744 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.254566 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-logs" (OuterVolumeSpecName: "logs") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.255523 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.260966 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-kube-api-access-mj264" (OuterVolumeSpecName: "kube-api-access-mj264") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "kube-api-access-mj264". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.262225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-scripts" (OuterVolumeSpecName: "scripts") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.268620 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.268763 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.269230 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf01449d-a52d-488f-bd93-b5f84b57fb13-kube-api-access-89ksh" (OuterVolumeSpecName: "kube-api-access-89ksh") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). InnerVolumeSpecName "kube-api-access-89ksh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.285750 4812 scope.go:117] "RemoveContainer" containerID="8394c58793ba4d118153a08a1542de897146c92d9f0bb3f90ac84fef380e3cf0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.289687 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.307426 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.315385 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.321093 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.344920 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-combined-ca-bundle\") pod \"ce28537c-ff5d-4318-bc95-8a29da6aae53\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.344991 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fph6p\" (UniqueName: \"kubernetes.io/projected/ce28537c-ff5d-4318-bc95-8a29da6aae53-kube-api-access-fph6p\") pod \"ce28537c-ff5d-4318-bc95-8a29da6aae53\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-config-data\") pod \"ce28537c-ff5d-4318-bc95-8a29da6aae53\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345152 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-nova-metadata-tls-certs\") pod \"ce28537c-ff5d-4318-bc95-8a29da6aae53\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345248 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce28537c-ff5d-4318-bc95-8a29da6aae53-logs\") pod \"ce28537c-ff5d-4318-bc95-8a29da6aae53\" (UID: \"ce28537c-ff5d-4318-bc95-8a29da6aae53\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345605 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj264\" (UniqueName: \"kubernetes.io/projected/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-kube-api-access-mj264\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345618 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345627 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345635 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345653 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.345661 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.349961 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ce28537c-ff5d-4318-bc95-8a29da6aae53-logs" (OuterVolumeSpecName: "logs") pod "ce28537c-ff5d-4318-bc95-8a29da6aae53" (UID: "ce28537c-ff5d-4318-bc95-8a29da6aae53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.363550 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.363610 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.363623 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89ksh\" (UniqueName: \"kubernetes.io/projected/cf01449d-a52d-488f-bd93-b5f84b57fb13-kube-api-access-89ksh\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.363636 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.363647 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.386050 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce28537c-ff5d-4318-bc95-8a29da6aae53-kube-api-access-fph6p" (OuterVolumeSpecName: "kube-api-access-fph6p") pod "ce28537c-ff5d-4318-bc95-8a29da6aae53" (UID: "ce28537c-ff5d-4318-bc95-8a29da6aae53"). InnerVolumeSpecName "kube-api-access-fph6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.389608 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.461487 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-config-data" (OuterVolumeSpecName: "config-data") pod "ce28537c-ff5d-4318-bc95-8a29da6aae53" (UID: "ce28537c-ff5d-4318-bc95-8a29da6aae53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.463688 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce28537c-ff5d-4318-bc95-8a29da6aae53" (UID: "ce28537c-ff5d-4318-bc95-8a29da6aae53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464129 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48k4h\" (UniqueName: \"kubernetes.io/projected/09c615e5-0d09-4775-b559-6883b6dc280b-kube-api-access-48k4h\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464197 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-run-httpd\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464221 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-combined-ca-bundle\") pod \"2e1ab98f-9015-432e-90f2-4692dc37c99e\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464238 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-scripts\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464264 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-internal-tls-certs\") pod \"2e1ab98f-9015-432e-90f2-4692dc37c99e\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464282 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-combined-ca-bundle\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464299 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rb4x\" (UniqueName: \"kubernetes.io/projected/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-kube-api-access-8rb4x\") pod \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\" (UID: \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn922\" (UniqueName: \"kubernetes.io/projected/2e1ab98f-9015-432e-90f2-4692dc37c99e-kube-api-access-fn922\") pod \"2e1ab98f-9015-432e-90f2-4692dc37c99e\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464416 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1ab98f-9015-432e-90f2-4692dc37c99e-logs\") pod \"2e1ab98f-9015-432e-90f2-4692dc37c99e\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464439 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-operator-scripts\") pod 
\"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\" (UID: \"72d5ebd7-a16f-49bb-a80e-1138d5f197d3\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464461 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-log-httpd\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464539 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-sg-core-conf-yaml\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464604 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data\") pod \"2e1ab98f-9015-432e-90f2-4692dc37c99e\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464631 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-public-tls-certs\") pod \"2e1ab98f-9015-432e-90f2-4692dc37c99e\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464655 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-config-data\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464672 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-ceilometer-tls-certs\") pod \"09c615e5-0d09-4775-b559-6883b6dc280b\" (UID: \"09c615e5-0d09-4775-b559-6883b6dc280b\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.464689 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data-custom\") pod \"2e1ab98f-9015-432e-90f2-4692dc37c99e\" (UID: \"2e1ab98f-9015-432e-90f2-4692dc37c99e\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.465870 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.466869 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.476809 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72d5ebd7-a16f-49bb-a80e-1138d5f197d3" (UID: "72d5ebd7-a16f-49bb-a80e-1138d5f197d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.477883 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.477275 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e1ab98f-9015-432e-90f2-4692dc37c99e-logs" (OuterVolumeSpecName: "logs") pod "2e1ab98f-9015-432e-90f2-4692dc37c99e" (UID: "2e1ab98f-9015-432e-90f2-4692dc37c99e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.478014 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.478043 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fph6p\" (UniqueName: \"kubernetes.io/projected/ce28537c-ff5d-4318-bc95-8a29da6aae53-kube-api-access-fph6p\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.478068 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.478082 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce28537c-ff5d-4318-bc95-8a29da6aae53-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.482420 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.485818 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1ab98f-9015-432e-90f2-4692dc37c99e-kube-api-access-fn922" (OuterVolumeSpecName: "kube-api-access-fn922") pod "2e1ab98f-9015-432e-90f2-4692dc37c99e" (UID: "2e1ab98f-9015-432e-90f2-4692dc37c99e"). InnerVolumeSpecName "kube-api-access-fn922". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.486113 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-scripts" (OuterVolumeSpecName: "scripts") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.486322 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c615e5-0d09-4775-b559-6883b6dc280b-kube-api-access-48k4h" (OuterVolumeSpecName: "kube-api-access-48k4h") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). InnerVolumeSpecName "kube-api-access-48k4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.489318 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-kube-api-access-8rb4x" (OuterVolumeSpecName: "kube-api-access-8rb4x") pod "72d5ebd7-a16f-49bb-a80e-1138d5f197d3" (UID: "72d5ebd7-a16f-49bb-a80e-1138d5f197d3"). InnerVolumeSpecName "kube-api-access-8rb4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.489805 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c8509a7-6885-4238-9da3-b214b5f8868e" (UID: "1c8509a7-6885-4238-9da3-b214b5f8868e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.498332 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.498331 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf01449d-a52d-488f-bd93-b5f84b57fb13","Type":"ContainerDied","Data":"c255d371213c86389ccde603aedf29d3762bbabbcf68894c469acb43318c4a54"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.498501 4812 scope.go:117] "RemoveContainer" containerID="0246dd3196f30fe8b017001268ba69633ac8f5832eaad85f9a757939e217b9fb" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.499023 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2e1ab98f-9015-432e-90f2-4692dc37c99e" (UID: "2e1ab98f-9015-432e-90f2-4692dc37c99e"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.510201 4812 generic.go:334] "Generic (PLEG): container finished" podID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerID="0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.510263 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" event={"ID":"2e1ab98f-9015-432e-90f2-4692dc37c99e","Type":"ContainerDied","Data":"0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.510283 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" event={"ID":"2e1ab98f-9015-432e-90f2-4692dc37c99e","Type":"ContainerDied","Data":"a73d77712fdb135d27de8994c1a6c7f28c01894b719bfb07965ead8a121b2062"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.510331 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b4d7f8ffb-jqsz8" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.519319 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-config-data" (OuterVolumeSpecName: "config-data") pod "1c8509a7-6885-4238-9da3-b214b5f8868e" (UID: "1c8509a7-6885-4238-9da3-b214b5f8868e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.537062 4812 generic.go:334] "Generic (PLEG): container finished" podID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerID="d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555" exitCode=1 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.537121 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi5538-account-delete-t9sww" event={"ID":"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9","Type":"ContainerDied","Data":"d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.537510 4812 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapi5538-account-delete-t9sww" secret="" err="secret \"galera-openstack-dockercfg-zbtr8\" not found" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.537547 4812 scope.go:117] "RemoveContainer" containerID="d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.541930 4812 scope.go:117] "RemoveContainer" containerID="289e772fea34cd9b5573b8563f47c8b619a399e0ab44c84b1b81de861433540a" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.544223 4812 generic.go:334] "Generic (PLEG): container finished" podID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerID="936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.544316 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9","Type":"ContainerDied","Data":"936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.544401 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9","Type":"ContainerDied","Data":"f4db2447b431d51c943a31dcf8f8575862b333ed58807da40fec9643fb3c419e"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.544465 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.562518 4812 generic.go:334] "Generic (PLEG): container finished" podID="b9dbc131-33f8-4df7-88ba-2d90e93a436c" containerID="8ae9509f408e8d7a118f2c286e6243f94fd2c8cc3503c8aa239b550061eb119e" exitCode=2 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.562984 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9dbc131-33f8-4df7-88ba-2d90e93a436c","Type":"ContainerDied","Data":"8ae9509f408e8d7a118f2c286e6243f94fd2c8cc3503c8aa239b550061eb119e"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.566833 4812 generic.go:334] "Generic (PLEG): container finished" podID="7c8539e2-3b3b-488b-9111-d59aa7317490" containerID="9e8766dff59643e0ac7ad1773abbd1f12836d5051ab75fbe8a53e51dc0236e21" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.566894 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell063c1-account-delete-9lhdk" event={"ID":"7c8539e2-3b3b-488b-9111-d59aa7317490","Type":"ContainerDied","Data":"9e8766dff59643e0ac7ad1773abbd1f12836d5051ab75fbe8a53e51dc0236e21"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.567486 4812 scope.go:117] "RemoveContainer" containerID="0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.573736 4812 generic.go:334] "Generic (PLEG): container finished" podID="174dd2a1-292b-4e07-8ed8-48d4109e9f57" containerID="ea38aff07288e1f408c0f6650bafdbaba5ec40ffed460a32bda946e57bb0490c" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.573908 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"174dd2a1-292b-4e07-8ed8-48d4109e9f57","Type":"ContainerDied","Data":"ea38aff07288e1f408c0f6650bafdbaba5ec40ffed460a32bda946e57bb0490c"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.578850 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plzzk\" (UniqueName: 
\"kubernetes.io/projected/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-kube-api-access-plzzk\") pod \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.578944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-logs\") pod \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.578988 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-combined-ca-bundle\") pod \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.579011 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-internal-tls-certs\") pod \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.579044 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-public-tls-certs\") pod \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.579148 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-config-data\") pod \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\" (UID: \"a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.579986 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fn922\" (UniqueName: \"kubernetes.io/projected/2e1ab98f-9015-432e-90f2-4692dc37c99e-kube-api-access-fn922\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580006 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e1ab98f-9015-432e-90f2-4692dc37c99e-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580016 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580031 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580043 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580053 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc 
kubenswrapper[4812]: I1124 19:39:37.580066 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580077 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580089 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580100 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48k4h\" (UniqueName: \"kubernetes.io/projected/09c615e5-0d09-4775-b559-6883b6dc280b-kube-api-access-48k4h\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580111 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580120 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09c615e5-0d09-4775-b559-6883b6dc280b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580131 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rb4x\" (UniqueName: \"kubernetes.io/projected/72d5ebd7-a16f-49bb-a80e-1138d5f197d3-kube-api-access-8rb4x\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580551 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.580677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56f4002e-a9bd-462d-b5ed-ce5ea166ec16","Type":"ContainerDied","Data":"2eaa5b687f6579881db404a651bac16524d0b6ab0ef469c215247b1c55749d79"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.581206 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-logs" (OuterVolumeSpecName: "logs") pod "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" (UID: "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.583886 4812 generic.go:334] "Generic (PLEG): container finished" podID="09c615e5-0d09-4775-b559-6883b6dc280b" containerID="a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.583963 4812 generic.go:334] "Generic (PLEG): container finished" podID="09c615e5-0d09-4775-b559-6883b6dc280b" containerID="fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.584029 4812 generic.go:334] "Generic (PLEG): container finished" podID="09c615e5-0d09-4775-b559-6883b6dc280b" containerID="8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.584110 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerDied","Data":"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.584172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerDied","Data":"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.584227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerDied","Data":"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.584279 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"09c615e5-0d09-4775-b559-6883b6dc280b","Type":"ContainerDied","Data":"4f851252ee23d67d22e90d2078e9411ca959bda541556e2caded4cc701cdeee5"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.584438 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.609794 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574f454648-szvd4" event={"ID":"1c8509a7-6885-4238-9da3-b214b5f8868e","Type":"ContainerDied","Data":"737ba922531e24113a510f760940859511d64804cfa011f58aebb0fed14cef4e"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.609904 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-574f454648-szvd4" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.636720 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.656845 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.656990 4812 scope.go:117] "RemoveContainer" containerID="4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.657617 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.680561 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-kube-api-access-plzzk" (OuterVolumeSpecName: "kube-api-access-plzzk") pod "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" (UID: "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9"). InnerVolumeSpecName "kube-api-access-plzzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.681675 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plzzk\" (UniqueName: \"kubernetes.io/projected/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-kube-api-access-plzzk\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.681695 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-logs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.681704 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.681713 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.703524 4812 generic.go:334] "Generic (PLEG): container finished" podID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerID="83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07" exitCode=0 Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.703704 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.704280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce28537c-ff5d-4318-bc95-8a29da6aae53","Type":"ContainerDied","Data":"83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.704472 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ce28537c-ff5d-4318-bc95-8a29da6aae53","Type":"ContainerDied","Data":"4f856e21ab2f67e4837e0420d5d6704ccea9a30bfeea083fd0b4293a9359d1c0"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.732339 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.105:5671: connect: connection refused" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.733391 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.789796 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.789875 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.789927 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts podName:2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:39.789908969 +0000 UTC m=+1373.578861340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts") pod "novaapi5538-account-delete-t9sww" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9") : configmap "openstack-scripts" not found Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.812045 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.812076 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e1ab98f-9015-432e-90f2-4692dc37c99e" (UID: "2e1ab98f-9015-432e-90f2-4692dc37c99e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.812618 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"de0c0304-db1e-4a1e-8b7e-5700c55fc9ed","Type":"ContainerDied","Data":"8400e0f5c9e4eb63ab3189b007b4d18c10dc4d7ede0f8b4a402d159ad599d536"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.812649 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.821554 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.835206 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-config-data" (OuterVolumeSpecName: "config-data") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.839709 4812 scope.go:117] "RemoveContainer" containerID="0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c" Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.840361 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c\": container with ID starting with 0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c not found: ID does not exist" containerID="0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.840390 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c"} err="failed to get container status \"0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c\": rpc error: code = NotFound desc = could not find container \"0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c\": container with ID starting with 0e2652f399100af755bd463b87697dab3c3766ed7ffde1ccbef750bb41e7d20c not found: ID does not exist" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.840409 4812 scope.go:117] "RemoveContainer" containerID="4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8" Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.841652 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8\": container with ID starting with 4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8 not found: ID does not exist" containerID="4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.841675 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8"} err="failed to get container status \"4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8\": rpc error: code = NotFound desc = could not find container \"4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8\": container with ID starting with 4cc9d9368a0a7cba1482b1fd1cdcb122538087b4b0ce32c9891541bd780042c8 not found: ID does not exist" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.841688 4812 scope.go:117] "RemoveContainer" 
containerID="936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.844516 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance7fd2-account-delete-vl62k" event={"ID":"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82","Type":"ContainerStarted","Data":"d5fae8072f9f408d92aa3d01e0eff2bf277ff66460adda263052312bf707a170"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.866507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbicanfac2-account-delete-v4hfz" event={"ID":"72d5ebd7-a16f-49bb-a80e-1138d5f197d3","Type":"ContainerDied","Data":"2ac7a4c2fa6ad3cf878bfa556ab1e046777c3c27ed6701e1ced8ec7359e2802a"} Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.866537 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ac7a4c2fa6ad3cf878bfa556ab1e046777c3c27ed6701e1ced8ec7359e2802a" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.862777 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-94c5495b6-f8ptk" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.845250 4812 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glance7fd2-account-delete-vl62k" secret="" err="secret \"galera-openstack-dockercfg-zbtr8\" not found" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.862736 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbicanfac2-account-delete-v4hfz" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.862878 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.862865 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.885939 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance7fd2-account-delete-vl62k" podStartSLOduration=7.885923291 podStartE2EDuration="7.885923291s" podCreationTimestamp="2025-11-24 19:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 19:39:37.873594087 +0000 UTC m=+1371.662546488" watchObservedRunningTime="2025-11-24 19:39:37.885923291 +0000 UTC m=+1371.674875662" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.893028 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.893055 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.893065 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.893115 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.893153 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts podName:7c8539e2-3b3b-488b-9111-d59aa7317490 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:39.893140183 +0000 UTC m=+1373.682092544 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts") pod "novacell063c1-account-delete-9lhdk" (UID: "7c8539e2-3b3b-488b-9111-d59aa7317490") : configmap "openstack-scripts" not found Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.919276 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "56f4002e-a9bd-462d-b5ed-ce5ea166ec16" (UID: "56f4002e-a9bd-462d-b5ed-ce5ea166ec16"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.967916 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" (UID: "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.981518 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). 
InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.994118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-certs\") pod \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.994159 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-combined-ca-bundle\") pod \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.994249 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t8lt\" (UniqueName: \"kubernetes.io/projected/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-api-access-9t8lt\") pod \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.994316 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-config\") pod \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\" (UID: \"b9dbc131-33f8-4df7-88ba-2d90e93a436c\") " Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.994706 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.995028 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56f4002e-a9bd-462d-b5ed-ce5ea166ec16-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: I1124 19:39:37.995046 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.995242 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:37 crc kubenswrapper[4812]: E1124 19:39:37.995313 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts podName:c2d42cca-2cc7-4b6a-9f1d-215f522b4c82 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:38.495294036 +0000 UTC m=+1372.284246407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts") pod "glance7fd2-account-delete-vl62k" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82") : configmap "openstack-scripts" not found Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.041310 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-api-access-9t8lt" (OuterVolumeSpecName: "kube-api-access-9t8lt") pod "b9dbc131-33f8-4df7-88ba-2d90e93a436c" (UID: "b9dbc131-33f8-4df7-88ba-2d90e93a436c"). InnerVolumeSpecName "kube-api-access-9t8lt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.051831 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" (UID: "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.052484 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9dbc131-33f8-4df7-88ba-2d90e93a436c" (UID: "b9dbc131-33f8-4df7-88ba-2d90e93a436c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.062789 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ce28537c-ff5d-4318-bc95-8a29da6aae53" (UID: "ce28537c-ff5d-4318-bc95-8a29da6aae53"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.066891 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data" (OuterVolumeSpecName: "config-data") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.069926 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1c8509a7-6885-4238-9da3-b214b5f8868e" (UID: "1c8509a7-6885-4238-9da3-b214b5f8868e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.083788 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-config-data" (OuterVolumeSpecName: "config-data") pod "cf01449d-a52d-488f-bd93-b5f84b57fb13" (UID: "cf01449d-a52d-488f-bd93-b5f84b57fb13"). InnerVolumeSpecName "config-data". 
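The retry scheduling in the nestedpendingoperations lines follows a doubling backoff: this section shows durationBeforeRetry values of 500ms, 1s, 2s and 8s for the same class of failed mount. A sketch of that doubling-with-a-cap shape; the initial value matches the 500ms seen here, but the cap is an assumption for illustration, not the kubelet's exact constant:

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay models the retry interval growth visible in the log
// (durationBeforeRetry 500ms -> 1s -> 2s -> 4s -> 8s ...).
func nextDelay(prev time.Duration) time.Duration {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2 * time.Minute // assumed cap for this sketch
	)
	if prev == 0 {
		return initialDelay
	}
	if next := 2 * prev; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 6; i++ {
		d = nextDelay(d)
		fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s, 16s
	}
}
```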
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.097251 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-config-data" (OuterVolumeSpecName: "config-data") pod "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" (UID: "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.101530 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.101761 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfr9\" (UniqueName: \"kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9\") pod \"keystonec995-account-delete-t2md4\" (UID: \"2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c\") " pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.102696 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t8lt\" (UniqueName: \"kubernetes.io/projected/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-api-access-9t8lt\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.102819 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce28537c-ff5d-4318-bc95-8a29da6aae53-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.102893 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.102908 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.102917 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.105018 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf01449d-a52d-488f-bd93-b5f84b57fb13-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.107574 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.107609 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.108094 4812 configmap.go:193] Couldn't get 
configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.108144 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts podName:2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c nodeName:}" failed. No retries permitted until 2025-11-24 19:39:40.108128868 +0000 UTC m=+1373.897081239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts") pod "keystonec995-account-delete-t2md4" (UID: "2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c") : configmap "openstack-scripts" not found Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.115124 4812 projected.go:194] Error preparing data for projected volume kube-api-access-tnfr9 for pod openstack/keystonec995-account-delete-t2md4: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.115432 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9 podName:2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c nodeName:}" failed. No retries permitted until 2025-11-24 19:39:40.115214526 +0000 UTC m=+1373.904166887 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tnfr9" (UniqueName: "kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9") pod "keystonec995-account-delete-t2md4" (UID: "2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.117155 4812 scope.go:117] "RemoveContainer" containerID="bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.127648 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-config-data" (OuterVolumeSpecName: "config-data") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.129905 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.133259 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2e1ab98f-9015-432e-90f2-4692dc37c99e" (UID: "2e1ab98f-9015-432e-90f2-4692dc37c99e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.144390 4812 util.go:48] "No ready sandbox for pod can be found. 
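The "failed to fetch token: serviceaccounts \"galera-openstack\" not found" error above is the projected service-account-token volume failing: the kubelet issues a TokenRequest for the pod's ServiceAccount, and that ServiceAccount has already been deleted. A client-go sketch of the equivalent request, with the namespace and name taken from the log; the expiry value and kubeconfig path are assumptions:

```go
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A projected token volume is backed by a TokenRequest against the pod's
	// ServiceAccount; it fails like the log line when the account is gone.
	exp := int64(3600) // assumed expiry for this sketch
	tr, err := clientset.CoreV1().ServiceAccounts("openstack").CreateToken(
		context.Background(), "galera-openstack",
		&authenticationv1.TokenRequest{
			Spec: authenticationv1.TokenRequestSpec{ExpirationSeconds: &exp},
		}, metav1.CreateOptions{})
	if err != nil {
		fmt.Println("token request failed:", err) // e.g. serviceaccounts "galera-openstack" not found
		return
	}
	fmt.Println("token issued, expires:", tr.Status.ExpirationTimestamp)
}
```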
Need to start a new one" pod="openstack/memcached-0" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.151461 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-94c5495b6-f8ptk"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.170964 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-94c5495b6-f8ptk"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.175673 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.178513 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.179749 4812 scope.go:117] "RemoveContainer" containerID="936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.184415 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6\": container with ID starting with 936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6 not found: ID does not exist" containerID="936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.184472 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6"} err="failed to get container status \"936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6\": rpc error: code = NotFound desc = could not find container \"936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6\": container with ID starting with 936b8b6a5274f750529b939168932b838cfb5b6deb53493684bf1a8efdb874f6 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.184508 4812 scope.go:117] "RemoveContainer" containerID="bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.184472 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.185056 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5\": container with ID starting with bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5 not found: ID does not exist" containerID="bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.185087 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5"} err="failed to get container status 
\"bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5\": rpc error: code = NotFound desc = could not find container \"bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5\": container with ID starting with bf019b3765313bce34a72ee4466da8ea1de124ec7d57584290484bf4ecf926a5 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.188157 4812 scope.go:117] "RemoveContainer" containerID="a37153fef35e2218bd72e8d349e3157d2b4261a45df7d03e4deb1d5ee538a030" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.192487 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.192613 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.192843 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" containerName="nova-scheduler-scheduler" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.203500 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.212398 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.212429 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.227516 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" (UID: "de0c0304-db1e-4a1e-8b7e-5700c55fc9ed"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.230574 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.231316 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" (UID: "a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.236093 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). 
InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.248147 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1c8509a7-6885-4238-9da3-b214b5f8868e" (UID: "1c8509a7-6885-4238-9da3-b214b5f8868e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.248530 4812 scope.go:117] "RemoveContainer" containerID="421d219729fffc3e229f6f163aa63a1f6a6b2a4dd6565bac6b4f81cad77dfa57" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.249611 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "b9dbc131-33f8-4df7-88ba-2d90e93a436c" (UID: "b9dbc131-33f8-4df7-88ba-2d90e93a436c"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.250271 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.253472 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "b9dbc131-33f8-4df7-88ba-2d90e93a436c" (UID: "b9dbc131-33f8-4df7-88ba-2d90e93a436c"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.257208 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.266287 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data" (OuterVolumeSpecName: "config-data") pod "2e1ab98f-9015-432e-90f2-4692dc37c99e" (UID: "2e1ab98f-9015-432e-90f2-4692dc37c99e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.273503 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09c615e5-0d09-4775-b559-6883b6dc280b" (UID: "09c615e5-0d09-4775-b559-6883b6dc280b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.295082 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2e1ab98f-9015-432e-90f2-4692dc37c99e" (UID: "2e1ab98f-9015-432e-90f2-4692dc37c99e"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.306240 4812 scope.go:117] "RemoveContainer" containerID="a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.313841 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-memcached-tls-certs\") pod \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.313891 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx9q6\" (UniqueName: \"kubernetes.io/projected/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kube-api-access-qx9q6\") pod \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.313956 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kolla-config\") pod \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314030 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-config-data\") pod \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314073 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-combined-ca-bundle\") pod \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\" (UID: \"174dd2a1-292b-4e07-8ed8-48d4109e9f57\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314577 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314600 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314615 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e1ab98f-9015-432e-90f2-4692dc37c99e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314627 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314640 4812 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314652 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314664 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09c615e5-0d09-4775-b559-6883b6dc280b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314675 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c8509a7-6885-4238-9da3-b214b5f8868e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314685 4812 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b9dbc131-33f8-4df7-88ba-2d90e93a436c-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.314936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "174dd2a1-292b-4e07-8ed8-48d4109e9f57" (UID: "174dd2a1-292b-4e07-8ed8-48d4109e9f57"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.317022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-config-data" (OuterVolumeSpecName: "config-data") pod "174dd2a1-292b-4e07-8ed8-48d4109e9f57" (UID: "174dd2a1-292b-4e07-8ed8-48d4109e9f57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.324643 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kube-api-access-qx9q6" (OuterVolumeSpecName: "kube-api-access-qx9q6") pod "174dd2a1-292b-4e07-8ed8-48d4109e9f57" (UID: "174dd2a1-292b-4e07-8ed8-48d4109e9f57"). InnerVolumeSpecName "kube-api-access-qx9q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.350972 4812 scope.go:117] "RemoveContainer" containerID="b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.351553 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "174dd2a1-292b-4e07-8ed8-48d4109e9f57" (UID: "174dd2a1-292b-4e07-8ed8-48d4109e9f57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.376024 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.382511 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.387533 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.405178 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "174dd2a1-292b-4e07-8ed8-48d4109e9f57" (UID: "174dd2a1-292b-4e07-8ed8-48d4109e9f57"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.413554 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.414004 4812 scope.go:117] "RemoveContainer" containerID="fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.416206 4812 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.416297 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx9q6\" (UniqueName: \"kubernetes.io/projected/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kube-api-access-qx9q6\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.416367 4812 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.416422 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/174dd2a1-292b-4e07-8ed8-48d4109e9f57-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.416473 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/174dd2a1-292b-4e07-8ed8-48d4109e9f57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.425411 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.455288 4812 scope.go:117] "RemoveContainer" containerID="8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.457665 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.459506 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.460760 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.460798 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="87b80396-f87d-435e-8478-0ecb34bccd94" containerName="nova-cell1-conductor-conductor" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.487743 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-b4d7f8ffb-jqsz8"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.502932 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-b4d7f8ffb-jqsz8"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.510047 4812 scope.go:117] "RemoveContainer" containerID="a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.515706 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2\": container with ID starting with a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2 not found: ID does not exist" containerID="a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.515762 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2"} err="failed to get container status \"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2\": rpc error: code = NotFound desc = could not find container \"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2\": container with ID starting with a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.515791 4812 scope.go:117] "RemoveContainer" 
containerID="b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.515885 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.516753 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63\": container with ID starting with b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63 not found: ID does not exist" containerID="b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.516798 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63"} err="failed to get container status \"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63\": rpc error: code = NotFound desc = could not find container \"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63\": container with ID starting with b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.516821 4812 scope.go:117] "RemoveContainer" containerID="fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.517517 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsdsg\" (UniqueName: \"kubernetes.io/projected/b12268ff-1299-46c4-8937-8c4dc00f7dc5-kube-api-access-jsdsg\") pod \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.517533 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267\": container with ID starting with fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267 not found: ID does not exist" containerID="fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.517554 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267"} err="failed to get container status \"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267\": rpc error: code = NotFound desc = could not find container \"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267\": container with ID starting with fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.517569 4812 scope.go:117] "RemoveContainer" containerID="8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.517592 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72df6d7-27f7-49e5-93d9-4069db72b602-operator-scripts\") pod \"b72df6d7-27f7-49e5-93d9-4069db72b602\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.522629 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12268ff-1299-46c4-8937-8c4dc00f7dc5-operator-scripts\") pod \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\" (UID: \"b12268ff-1299-46c4-8937-8c4dc00f7dc5\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.522722 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgxq6\" (UniqueName: \"kubernetes.io/projected/b72df6d7-27f7-49e5-93d9-4069db72b602-kube-api-access-dgxq6\") pod \"b72df6d7-27f7-49e5-93d9-4069db72b602\" (UID: \"b72df6d7-27f7-49e5-93d9-4069db72b602\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.522802 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znmhl\" (UniqueName: \"kubernetes.io/projected/d1b0b239-3d7b-40c3-a599-f3f74d452813-kube-api-access-znmhl\") pod \"d1b0b239-3d7b-40c3-a599-f3f74d452813\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.522844 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b0b239-3d7b-40c3-a599-f3f74d452813-operator-scripts\") pod \"d1b0b239-3d7b-40c3-a599-f3f74d452813\" (UID: \"d1b0b239-3d7b-40c3-a599-f3f74d452813\") " Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.520087 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b72df6d7-27f7-49e5-93d9-4069db72b602-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b72df6d7-27f7-49e5-93d9-4069db72b602" (UID: "b72df6d7-27f7-49e5-93d9-4069db72b602"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.523105 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b12268ff-1299-46c4-8937-8c4dc00f7dc5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b12268ff-1299-46c4-8937-8c4dc00f7dc5" (UID: "b12268ff-1299-46c4-8937-8c4dc00f7dc5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.523528 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1b0b239-3d7b-40c3-a599-f3f74d452813-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1b0b239-3d7b-40c3-a599-f3f74d452813" (UID: "d1b0b239-3d7b-40c3-a599-f3f74d452813"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.523587 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.523634 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts podName:c2d42cca-2cc7-4b6a-9f1d-215f522b4c82 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:39.523618956 +0000 UTC m=+1373.312571447 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts") pod "glance7fd2-account-delete-vl62k" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82") : configmap "openstack-scripts" not found Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.523721 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1b0b239-3d7b-40c3-a599-f3f74d452813-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.523735 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72df6d7-27f7-49e5-93d9-4069db72b602-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.523745 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b12268ff-1299-46c4-8937-8c4dc00f7dc5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.523803 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.523822 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data podName:db2cd03e-b999-48ea-b540-7fd35356ba8b nodeName:}" failed. No retries permitted until 2025-11-24 19:39:46.523816011 +0000 UTC m=+1380.312768372 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data") pod "rabbitmq-cell1-server-0" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b") : configmap "rabbitmq-cell1-config-data" not found Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.521707 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0\": container with ID starting with 8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0 not found: ID does not exist" containerID="8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.523903 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0"} err="failed to get container status \"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0\": rpc error: code = NotFound desc = could not find container \"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0\": container with ID starting with 8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.524381 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.525775 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72df6d7-27f7-49e5-93d9-4069db72b602-kube-api-access-dgxq6" (OuterVolumeSpecName: "kube-api-access-dgxq6") pod "b72df6d7-27f7-49e5-93d9-4069db72b602" (UID: "b72df6d7-27f7-49e5-93d9-4069db72b602"). 
InnerVolumeSpecName "kube-api-access-dgxq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.524810 4812 scope.go:117] "RemoveContainer" containerID="a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.528804 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1b0b239-3d7b-40c3-a599-f3f74d452813-kube-api-access-znmhl" (OuterVolumeSpecName: "kube-api-access-znmhl") pod "d1b0b239-3d7b-40c3-a599-f3f74d452813" (UID: "d1b0b239-3d7b-40c3-a599-f3f74d452813"). InnerVolumeSpecName "kube-api-access-znmhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.529270 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2"} err="failed to get container status \"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2\": rpc error: code = NotFound desc = could not find container \"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2\": container with ID starting with a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.529322 4812 scope.go:117] "RemoveContainer" containerID="b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.530470 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63"} err="failed to get container status \"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63\": rpc error: code = NotFound desc = could not find container \"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63\": container with ID starting with b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.530488 4812 scope.go:117] "RemoveContainer" containerID="fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.531015 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b12268ff-1299-46c4-8937-8c4dc00f7dc5-kube-api-access-jsdsg" (OuterVolumeSpecName: "kube-api-access-jsdsg") pod "b12268ff-1299-46c4-8937-8c4dc00f7dc5" (UID: "b12268ff-1299-46c4-8937-8c4dc00f7dc5"). InnerVolumeSpecName "kube-api-access-jsdsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.531089 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267"} err="failed to get container status \"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267\": rpc error: code = NotFound desc = could not find container \"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267\": container with ID starting with fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.531110 4812 scope.go:117] "RemoveContainer" containerID="8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.532660 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0"} err="failed to get container status \"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0\": rpc error: code = NotFound desc = could not find container \"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0\": container with ID starting with 8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.532694 4812 scope.go:117] "RemoveContainer" containerID="a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.535176 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2"} err="failed to get container status \"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2\": rpc error: code = NotFound desc = could not find container \"a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2\": container with ID starting with a047b4c90f79462ec8f26ccbf381aa780bbd398cd0630cada953ad052e5fd2f2 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.535222 4812 scope.go:117] "RemoveContainer" containerID="b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.535485 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63"} err="failed to get container status \"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63\": rpc error: code = NotFound desc = could not find container \"b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63\": container with ID starting with b33b7f87502d745f31f318f8a33ecc6c9bab3e55ca65da40c0e94735074f9e63 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.535513 4812 scope.go:117] "RemoveContainer" containerID="fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.536623 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267"} err="failed to get container status \"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267\": rpc error: code = NotFound desc = could not find container 
\"fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267\": container with ID starting with fdbc71e67e080e99e72ca6b3ad940c4d5589e6164e76ee7dc0311d7d028d4267 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.536707 4812 scope.go:117] "RemoveContainer" containerID="8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.539060 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.542739 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0"} err="failed to get container status \"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0\": rpc error: code = NotFound desc = could not find container \"8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0\": container with ID starting with 8d6286bc10f2fa2125378fc4559f1812d3d363e53a32703d3eec51ebc4a2bef0 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.542779 4812 scope.go:117] "RemoveContainer" containerID="d07bf2cbab76f1bef1a940806ac6f6323ae423f3a69064aeb96db6f329d5665f" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.543697 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.549970 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.556649 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.563554 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-574f454648-szvd4"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.564142 4812 scope.go:117] "RemoveContainer" containerID="692e428119dfde862fe0aeb20102df094a9cb17fc1bcce8548dce4d9b4546e92" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.569528 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-574f454648-szvd4"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.580705 4812 scope.go:117] "RemoveContainer" containerID="83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.625456 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znmhl\" (UniqueName: \"kubernetes.io/projected/d1b0b239-3d7b-40c3-a599-f3f74d452813-kube-api-access-znmhl\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.625511 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsdsg\" (UniqueName: \"kubernetes.io/projected/b12268ff-1299-46c4-8937-8c4dc00f7dc5-kube-api-access-jsdsg\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.625524 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgxq6\" (UniqueName: \"kubernetes.io/projected/b72df6d7-27f7-49e5-93d9-4069db72b602-kube-api-access-dgxq6\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.655388 4812 scope.go:117] "RemoveContainer" containerID="3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.676228 4812 scope.go:117] "RemoveContainer" 
containerID="83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.676533 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07\": container with ID starting with 83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07 not found: ID does not exist" containerID="83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.676580 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07"} err="failed to get container status \"83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07\": rpc error: code = NotFound desc = could not find container \"83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07\": container with ID starting with 83ad3cb937149c006757630969320727679b254e01b1b68d6042c9ca2d9a7b07 not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.676640 4812 scope.go:117] "RemoveContainer" containerID="3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.677098 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a\": container with ID starting with 3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a not found: ID does not exist" containerID="3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.677122 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a"} err="failed to get container status \"3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a\": rpc error: code = NotFound desc = could not find container \"3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a\": container with ID starting with 3551f287cbecbb04cb08a8e33cd37a12bda36c20f012d009c7acdca3dc72561a not found: ID does not exist" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.677144 4812 scope.go:117] "RemoveContainer" containerID="4984c0a89981ef7e9ccb2912cd1e3db25ba3f333055dcbc10b30cb69f8c45835" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.730040 4812 scope.go:117] "RemoveContainer" containerID="5afde10d2f90a0c5d1bc9ea9375923c024c51c32da257a3842c5489dcc1fd794" Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.850196 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be is running failed: container process not found" containerID="38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.850512 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be is running failed: container process not found" 
containerID="38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.851045 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be is running failed: container process not found" containerID="38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 24 19:39:38 crc kubenswrapper[4812]: E1124 19:39:38.851091 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerName="galera" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.873472 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b9dbc131-33f8-4df7-88ba-2d90e93a436c","Type":"ContainerDied","Data":"53e80f140a60dea173919a25bc6a589b453849483c9dc0c47422cfab29e78f8a"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.873566 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.879352 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement0bb7-account-delete-m56jl" event={"ID":"b12268ff-1299-46c4-8937-8c4dc00f7dc5","Type":"ContainerDied","Data":"b31271e817b6cbe93c188af2bf94c180b3e0e23ccb62090380412786781d78c1"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.879385 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b31271e817b6cbe93c188af2bf94c180b3e0e23ccb62090380412786781d78c1" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.879436 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement0bb7-account-delete-m56jl" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.882509 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0648d4b9-0096-4092-b2b9-70e23f9c863c/ovn-northd/0.log" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.882542 4812 generic.go:334] "Generic (PLEG): container finished" podID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" exitCode=139 Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.882606 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0648d4b9-0096-4092-b2b9-70e23f9c863c","Type":"ContainerDied","Data":"1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.886488 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.888750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"174dd2a1-292b-4e07-8ed8-48d4109e9f57","Type":"ContainerDied","Data":"97b35843411bd80efad5538b8fdc3d425ac2482bcd0f688f1834505060b3b6aa"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.894316 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder5926-account-delete-lvwv5" event={"ID":"d1b0b239-3d7b-40c3-a599-f3f74d452813","Type":"ContainerDied","Data":"887cc2b17ff121668f2000018bdc31443036cc336d570f9b6969a8addfc3e61d"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.894400 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="887cc2b17ff121668f2000018bdc31443036cc336d570f9b6969a8addfc3e61d" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.894453 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder5926-account-delete-lvwv5" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.896947 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi5538-account-delete-t9sww" event={"ID":"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9","Type":"ContainerStarted","Data":"77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.897093 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/novaapi5538-account-delete-t9sww" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerName="mariadb-account-delete" containerID="cri-o://77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01" gracePeriod=30 Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.898722 4812 scope.go:117] "RemoveContainer" containerID="8ae9509f408e8d7a118f2c286e6243f94fd2c8cc3503c8aa239b550061eb119e" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.919274 4812 generic.go:334] "Generic (PLEG): container finished" podID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerID="38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be" exitCode=0 Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.919325 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b2264eb6-f494-4800-832b-d1e1d02daf4e","Type":"ContainerDied","Data":"38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.938881 4812 scope.go:117] "RemoveContainer" containerID="ea38aff07288e1f408c0f6650bafdbaba5ec40ffed460a32bda946e57bb0490c" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.939314 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystonec995-account-delete-t2md4" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.941450 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron448d-account-delete-qskvr" event={"ID":"b72df6d7-27f7-49e5-93d9-4069db72b602","Type":"ContainerDied","Data":"fc1c492c9e28818aec5eb4f22cf758519cfc946a5a19208132fe5282f31a336a"} Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.941531 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.959463 4812 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/glance7fd2-account-delete-vl62k" secret="" err="secret \"galera-openstack-dockercfg-zbtr8\" not found" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.960115 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.977556 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" path="/var/lib/kubelet/pods/09c615e5-0d09-4775-b559-6883b6dc280b/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.978353 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" path="/var/lib/kubelet/pods/1c8509a7-6885-4238-9da3-b214b5f8868e/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.979527 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" path="/var/lib/kubelet/pods/2e1ab98f-9015-432e-90f2-4692dc37c99e/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.980122 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" path="/var/lib/kubelet/pods/56f4002e-a9bd-462d-b5ed-ce5ea166ec16/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.980923 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93240875-2cab-44e0-b475-20f46cb4850e" path="/var/lib/kubelet/pods/93240875-2cab-44e0-b475-20f46cb4850e/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.981904 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" path="/var/lib/kubelet/pods/a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.982553 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" path="/var/lib/kubelet/pods/ce28537c-ff5d-4318-bc95-8a29da6aae53/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.983195 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" path="/var/lib/kubelet/pods/cf01449d-a52d-488f-bd93-b5f84b57fb13/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.984455 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" path="/var/lib/kubelet/pods/de0c0304-db1e-4a1e-8b7e-5700c55fc9ed/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.985014 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" path="/var/lib/kubelet/pods/fde2bf26-3775-4ffe-83d7-1bc11d36c1e3/volumes" Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.985796 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.985826 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 24 19:39:38 crc kubenswrapper[4812]: I1124 19:39:38.986989 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.089857 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.123675 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystonec995-account-delete-t2md4"] Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.128276 4812 scope.go:117] "RemoveContainer" containerID="de2924e496c4d6a4ca668dec3242cded056b7265571b0684e6ca59824356a9c8" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.132665 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystonec995-account-delete-t2md4"] Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.170280 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92 is running failed: container process not found" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.172908 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92 is running failed: container process not found" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.173398 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92 is running failed: container process not found" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.173451 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="ovn-northd" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.234668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-operator-scripts\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.234878 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdgzx\" (UniqueName: \"kubernetes.io/projected/b2264eb6-f494-4800-832b-d1e1d02daf4e-kube-api-access-mdgzx\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.235035 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-generated\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 
19:39:39.235185 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-kolla-config\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.237150 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.237596 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-combined-ca-bundle\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.237626 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-galera-tls-certs\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.237643 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-default\") pod \"b2264eb6-f494-4800-832b-d1e1d02daf4e\" (UID: \"b2264eb6-f494-4800-832b-d1e1d02daf4e\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.235583 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.235797 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.235996 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.238530 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.238602 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.238726 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.238754 4812 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.238779 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnfr9\" (UniqueName: \"kubernetes.io/projected/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c-kube-api-access-tnfr9\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.238707 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.255490 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2264eb6-f494-4800-832b-d1e1d02daf4e-kube-api-access-mdgzx" (OuterVolumeSpecName: "kube-api-access-mdgzx") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "kube-api-access-mdgzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.276643 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "mysql-db") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.283025 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.287284 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2264eb6-f494-4800-832b-d1e1d02daf4e" (UID: "b2264eb6-f494-4800-832b-d1e1d02daf4e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.304955 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0648d4b9-0096-4092-b2b9-70e23f9c863c/ovn-northd/0.log" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.305074 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.316254 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.341323 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdgzx\" (UniqueName: \"kubernetes.io/projected/b2264eb6-f494-4800-832b-d1e1d02daf4e-kube-api-access-mdgzx\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.341382 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.341394 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.341403 4812 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2264eb6-f494-4800-832b-d1e1d02daf4e-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.341412 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b2264eb6-f494-4800-832b-d1e1d02daf4e-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.341472 4812 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.341557 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data podName:7cb8ede6-6163-4906-89f7-7fe6458edc36 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:47.341537756 +0000 UTC m=+1381.130490127 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data") pod "rabbitmq-server-0" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36") : configmap "rabbitmq-config-data" not found Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.377584 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443459 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbjsg\" (UniqueName: \"kubernetes.io/projected/7c8539e2-3b3b-488b-9111-d59aa7317490-kube-api-access-pbjsg\") pod \"7c8539e2-3b3b-488b-9111-d59aa7317490\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443578 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-combined-ca-bundle\") pod \"0648d4b9-0096-4092-b2b9-70e23f9c863c\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts\") pod \"7c8539e2-3b3b-488b-9111-d59aa7317490\" (UID: \"7c8539e2-3b3b-488b-9111-d59aa7317490\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443652 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-scripts\") pod \"0648d4b9-0096-4092-b2b9-70e23f9c863c\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443686 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-metrics-certs-tls-certs\") pod \"0648d4b9-0096-4092-b2b9-70e23f9c863c\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443712 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz55c\" (UniqueName: \"kubernetes.io/projected/0648d4b9-0096-4092-b2b9-70e23f9c863c-kube-api-access-nz55c\") pod \"0648d4b9-0096-4092-b2b9-70e23f9c863c\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443749 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-rundir\") pod \"0648d4b9-0096-4092-b2b9-70e23f9c863c\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443796 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-northd-tls-certs\") pod \"0648d4b9-0096-4092-b2b9-70e23f9c863c\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.443846 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-config\") pod \"0648d4b9-0096-4092-b2b9-70e23f9c863c\" (UID: \"0648d4b9-0096-4092-b2b9-70e23f9c863c\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.444170 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.444539 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-config" (OuterVolumeSpecName: "config") pod "0648d4b9-0096-4092-b2b9-70e23f9c863c" (UID: "0648d4b9-0096-4092-b2b9-70e23f9c863c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.447523 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c8539e2-3b3b-488b-9111-d59aa7317490" (UID: "7c8539e2-3b3b-488b-9111-d59aa7317490"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.448076 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-scripts" (OuterVolumeSpecName: "scripts") pod "0648d4b9-0096-4092-b2b9-70e23f9c863c" (UID: "0648d4b9-0096-4092-b2b9-70e23f9c863c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.448644 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "0648d4b9-0096-4092-b2b9-70e23f9c863c" (UID: "0648d4b9-0096-4092-b2b9-70e23f9c863c"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.453184 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0648d4b9-0096-4092-b2b9-70e23f9c863c-kube-api-access-nz55c" (OuterVolumeSpecName: "kube-api-access-nz55c") pod "0648d4b9-0096-4092-b2b9-70e23f9c863c" (UID: "0648d4b9-0096-4092-b2b9-70e23f9c863c"). InnerVolumeSpecName "kube-api-access-nz55c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.461841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8539e2-3b3b-488b-9111-d59aa7317490-kube-api-access-pbjsg" (OuterVolumeSpecName: "kube-api-access-pbjsg") pod "7c8539e2-3b3b-488b-9111-d59aa7317490" (UID: "7c8539e2-3b3b-488b-9111-d59aa7317490"). InnerVolumeSpecName "kube-api-access-pbjsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.471549 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0648d4b9-0096-4092-b2b9-70e23f9c863c" (UID: "0648d4b9-0096-4092-b2b9-70e23f9c863c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.520121 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "0648d4b9-0096-4092-b2b9-70e23f9c863c" (UID: "0648d4b9-0096-4092-b2b9-70e23f9c863c"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.533450 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551152 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551188 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbjsg\" (UniqueName: \"kubernetes.io/projected/7c8539e2-3b3b-488b-9111-d59aa7317490-kube-api-access-pbjsg\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551199 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551208 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8539e2-3b3b-488b-9111-d59aa7317490-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551218 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0648d4b9-0096-4092-b2b9-70e23f9c863c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551226 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz55c\" (UniqueName: \"kubernetes.io/projected/0648d4b9-0096-4092-b2b9-70e23f9c863c-kube-api-access-nz55c\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.551160 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.551309 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts podName:c2d42cca-2cc7-4b6a-9f1d-215f522b4c82 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:41.551288025 +0000 UTC m=+1375.340240396 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts") pod "glance7fd2-account-delete-vl62k" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82") : configmap "openstack-scripts" not found Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551234 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.551567 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.558254 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0648d4b9-0096-4092-b2b9-70e23f9c863c" (UID: "0648d4b9-0096-4092-b2b9-70e23f9c863c"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.618903 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652493 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-server-conf\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652572 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652598 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-erlang-cookie\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652629 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2cd03e-b999-48ea-b540-7fd35356ba8b-erlang-cookie-secret\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652664 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6plcr\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-kube-api-access-6plcr\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652686 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-plugins-conf\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: 
\"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652760 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-tls\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652796 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652820 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-confd\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652861 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-plugins\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.652878 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2cd03e-b999-48ea-b540-7fd35356ba8b-pod-info\") pod \"db2cd03e-b999-48ea-b540-7fd35356ba8b\" (UID: \"db2cd03e-b999-48ea-b540-7fd35356ba8b\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.653180 4812 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0648d4b9-0096-4092-b2b9-70e23f9c863c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.653974 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.654511 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.655068 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.659778 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-kube-api-access-6plcr" (OuterVolumeSpecName: "kube-api-access-6plcr") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "kube-api-access-6plcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.663421 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/db2cd03e-b999-48ea-b540-7fd35356ba8b-pod-info" (OuterVolumeSpecName: "pod-info") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.663657 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.666230 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.666488 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db2cd03e-b999-48ea-b540-7fd35356ba8b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.683732 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data" (OuterVolumeSpecName: "config-data") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.695950 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-server-conf" (OuterVolumeSpecName: "server-conf") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.744039 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "db2cd03e-b999-48ea-b540-7fd35356ba8b" (UID: "db2cd03e-b999-48ea-b540-7fd35356ba8b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cb8ede6-6163-4906-89f7-7fe6458edc36-pod-info\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754396 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-plugins\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754423 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-tls\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754454 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-plugins-conf\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754477 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754526 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754553 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-erlang-cookie\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754570 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn4nv\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-kube-api-access-zn4nv\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754588 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-confd\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754643 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-server-conf\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: 
\"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.754941 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cb8ede6-6163-4906-89f7-7fe6458edc36-erlang-cookie-secret\") pod \"7cb8ede6-6163-4906-89f7-7fe6458edc36\" (UID: \"7cb8ede6-6163-4906-89f7-7fe6458edc36\") " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755710 4812 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755730 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755758 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755772 4812 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/db2cd03e-b999-48ea-b540-7fd35356ba8b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755786 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6plcr\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-kube-api-access-6plcr\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755801 4812 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755814 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755826 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/db2cd03e-b999-48ea-b540-7fd35356ba8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755839 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755850 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/db2cd03e-b999-48ea-b540-7fd35356ba8b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.755860 4812 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/db2cd03e-b999-48ea-b540-7fd35356ba8b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.758232 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.758838 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.759142 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7cb8ede6-6163-4906-89f7-7fe6458edc36-pod-info" (OuterVolumeSpecName: "pod-info") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.759338 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.762804 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.770390 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-kube-api-access-zn4nv" (OuterVolumeSpecName: "kube-api-access-zn4nv") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "kube-api-access-zn4nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.771435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb8ede6-6163-4906-89f7-7fe6458edc36-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.784208 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.792493 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). 
InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.795857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data" (OuterVolumeSpecName: "config-data") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.840467 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-server-conf" (OuterVolumeSpecName: "server-conf") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.859144 4812 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.859180 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.859189 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.859199 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.866485 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn4nv\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-kube-api-access-zn4nv\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.866502 4812 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7cb8ede6-6163-4906-89f7-7fe6458edc36-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.866516 4812 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7cb8ede6-6163-4906-89f7-7fe6458edc36-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.866533 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.866545 4812 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7cb8ede6-6163-4906-89f7-7fe6458edc36-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.866557 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.866569 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.867688 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:39 crc kubenswrapper[4812]: E1124 19:39:39.867827 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts podName:2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:43.867803218 +0000 UTC m=+1377.656755689 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts") pod "novaapi5538-account-delete-t9sww" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9") : configmap "openstack-scripts" not found Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.875778 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7cb8ede6-6163-4906-89f7-7fe6458edc36" (UID: "7cb8ede6-6163-4906-89f7-7fe6458edc36"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.882532 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.912972 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.957902 4812 generic.go:334] "Generic (PLEG): container finished" podID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerID="f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4" exitCode=0 Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.958543 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2cd03e-b999-48ea-b540-7fd35356ba8b","Type":"ContainerDied","Data":"f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.958626 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"db2cd03e-b999-48ea-b540-7fd35356ba8b","Type":"ContainerDied","Data":"908faf8b3239cedbb34edf774cab8f878190278ed040ba8f8bc97dcc566dfde4"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.958695 4812 scope.go:117] "RemoveContainer" containerID="f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.958883 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.967309 4812 generic.go:334] "Generic (PLEG): container finished" podID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerID="608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc" exitCode=0 Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.967397 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7cb8ede6-6163-4906-89f7-7fe6458edc36","Type":"ContainerDied","Data":"608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.967421 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7cb8ede6-6163-4906-89f7-7fe6458edc36","Type":"ContainerDied","Data":"e4c3681637a7f9a6d4ce3d769069698835fa06a760dd9e1e654a8a69f0cbb071"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.967430 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.967702 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.968000 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7cb8ede6-6163-4906-89f7-7fe6458edc36-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.970120 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0648d4b9-0096-4092-b2b9-70e23f9c863c/ovn-northd/0.log" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.970225 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0648d4b9-0096-4092-b2b9-70e23f9c863c","Type":"ContainerDied","Data":"61f062143007d0e2ed51cf609156fdd92f7d94b279cda6b166dc5c6918fc570c"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.970324 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.973659 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b2264eb6-f494-4800-832b-d1e1d02daf4e","Type":"ContainerDied","Data":"8e687f92f86ed01924f84d1ca0ca051d992fccfd8078818b09102e6f0e4652f8"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.973743 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.979305 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" containerID="d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a" exitCode=0 Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.979379 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8f69d86c-2v9p4" event={"ID":"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3","Type":"ContainerDied","Data":"d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.979403 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8f69d86c-2v9p4" event={"ID":"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3","Type":"ContainerDied","Data":"2c6233e47a664613934185a9c49954edbe75e5f9d825c33d1da8e1f2c02345fb"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.979472 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c8f69d86c-2v9p4" Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.981974 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell063c1-account-delete-9lhdk" event={"ID":"7c8539e2-3b3b-488b-9111-d59aa7317490","Type":"ContainerDied","Data":"188993a747e2ccdf97c80db73f5a7857980951204b931360a0a221f4d58536a8"} Nov 24 19:39:39 crc kubenswrapper[4812]: I1124 19:39:39.982025 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell063c1-account-delete-9lhdk" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.010702 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.020605 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.024474 4812 scope.go:117] "RemoveContainer" containerID="0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.039622 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.051810 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.068739 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-public-tls-certs\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.068790 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-internal-tls-certs\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.068818 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-config-data\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 
19:39:40.068852 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdcvk\" (UniqueName: \"kubernetes.io/projected/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-kube-api-access-cdcvk\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.068874 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-scripts\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.068891 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-fernet-keys\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.068918 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-combined-ca-bundle\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.068950 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-credential-keys\") pod \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\" (UID: \"c4f68d8b-5bb9-4778-9416-1e9f82bba8f3\") " Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.070569 4812 scope.go:117] "RemoveContainer" containerID="f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.075553 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: E1124 19:39:40.077373 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4\": container with ID starting with f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4 not found: ID does not exist" containerID="f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.077415 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4"} err="failed to get container status \"f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4\": rpc error: code = NotFound desc = could not find container \"f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4\": container with ID starting with f9a279d42e1d81ebcc809ef24fb24a103a629e0087d30c570c28a4d054c09cc4 not found: ID does not exist" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.077439 4812 scope.go:117] "RemoveContainer" containerID="0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0" Nov 24 19:39:40 crc kubenswrapper[4812]: E1124 19:39:40.077801 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0\": container with 
ID starting with 0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0 not found: ID does not exist" containerID="0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.077837 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0"} err="failed to get container status \"0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0\": rpc error: code = NotFound desc = could not find container \"0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0\": container with ID starting with 0a9548dd5692b6153f3561ecedbb50d79cca6e5b8420150e9bcf2b64a40747c0 not found: ID does not exist" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.077857 4812 scope.go:117] "RemoveContainer" containerID="608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.086607 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.087162 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-kube-api-access-cdcvk" (OuterVolumeSpecName: "kube-api-access-cdcvk") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "kube-api-access-cdcvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.090590 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-scripts" (OuterVolumeSpecName: "scripts") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.090569 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.095365 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "fernet-keys". 
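
The RemoveContainer / "DeleteContainer returned error" pairs above are benign: the CRI runtime has already garbage-collected the container, so the follow-up status lookup returns gRPC NotFound and the kubelet treats the deletion as already done. A sketch of that idempotent-delete check follows, assuming the google.golang.org/grpc module; removeContainer is a hypothetical stand-in for the CRI RemoveContainer RPC.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer is a hypothetical stand-in for the CRI RemoveContainer RPC.
func removeContainer(id string) error {
	// Simulate the runtime having already pruned the container.
	return status.Errorf(codes.NotFound,
		"could not find container %q: ID does not exist", id)
}

// deleteContainer treats NotFound as success, mirroring how the kubelet
// logs the error above but does not retry the removal.
func deleteContainer(id string) error {
	err := removeContainer(id)
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %q already gone; nothing to do\n", id)
		return nil
	}
	return err
}

func main() {
	if err := deleteContainer("f9a279d42e1d"); err != nil {
		fmt.Println("delete failed:", err)
	}
}
```
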
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.095429 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.100881 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.107193 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell063c1-account-delete-9lhdk"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.110708 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.126695 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell063c1-account-delete-9lhdk"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.149641 4812 scope.go:117] "RemoveContainer" containerID="19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.154637 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-config-data" (OuterVolumeSpecName: "config-data") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.159050 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.161716 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" (UID: "c4f68d8b-5bb9-4778-9416-1e9f82bba8f3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170644 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170676 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170685 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170694 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170703 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170712 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdcvk\" (UniqueName: \"kubernetes.io/projected/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-kube-api-access-cdcvk\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170721 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.170729 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.173653 4812 scope.go:117] "RemoveContainer" containerID="608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc" Nov 24 19:39:40 crc kubenswrapper[4812]: E1124 19:39:40.174101 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc\": container with ID starting with 608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc not found: ID does not exist" containerID="608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.174145 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc"} err="failed to get container status \"608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc\": rpc error: code = NotFound desc = could not find container \"608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc\": container with ID starting with 608679d453b4a44e83990b6c799b689d9c67bad4d515f6fee46f773957c810bc not found: ID does not exist" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.174171 4812 scope.go:117] "RemoveContainer" 
containerID="19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a" Nov 24 19:39:40 crc kubenswrapper[4812]: E1124 19:39:40.174579 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a\": container with ID starting with 19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a not found: ID does not exist" containerID="19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.174610 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a"} err="failed to get container status \"19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a\": rpc error: code = NotFound desc = could not find container \"19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a\": container with ID starting with 19fddee3a853c3fd532fe650fc31347397a2376049b8e783055c2e331f93470a not found: ID does not exist" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.174636 4812 scope.go:117] "RemoveContainer" containerID="14a1254945b1fd6a95f6b90d7cf54f5c07482fc6aea59fae62c4d9b5369203bf" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.210613 4812 scope.go:117] "RemoveContainer" containerID="1612fa99105c6c6db36c69df3ab73e15ab7582ce223f0eb33cb1b46662439f92" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.226927 4812 scope.go:117] "RemoveContainer" containerID="38f77e06a7c796fa3248ed4ee27a5c9f23b546d855d2d2787a29ffdee9ac28be" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.251882 4812 scope.go:117] "RemoveContainer" containerID="d053bfd0635142ea6ca3037bd768b91d0f01e9e71438c5bf178927a708e4ef99" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.274019 4812 scope.go:117] "RemoveContainer" containerID="d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.304658 4812 scope.go:117] "RemoveContainer" containerID="d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a" Nov 24 19:39:40 crc kubenswrapper[4812]: E1124 19:39:40.306065 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a\": container with ID starting with d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a not found: ID does not exist" containerID="d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.306106 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a"} err="failed to get container status \"d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a\": rpc error: code = NotFound desc = could not find container \"d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a\": container with ID starting with d758e81f7d47380dbb825bd4c9f03b18b5d028cad105d8ab2aa48cd43176fb2a not found: ID does not exist" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.306134 4812 scope.go:117] "RemoveContainer" containerID="9e8766dff59643e0ac7ad1773abbd1f12836d5051ab75fbe8a53e51dc0236e21" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.318595 4812 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-c8f69d86c-2v9p4"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.322723 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c8f69d86c-2v9p4"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.804826 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-7776w"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.817304 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-7776w"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.848461 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement0bb7-account-delete-m56jl"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.868436 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0bb7-account-create-jvn5p"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.874688 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement0bb7-account-delete-m56jl"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.878051 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0bb7-account-create-jvn5p"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.964819 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-5n45p"] Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.967757 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.984320 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" path="/var/lib/kubelet/pods/0648d4b9-0096-4092-b2b9-70e23f9c863c/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.988962 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ed3a39d-494f-40f0-a336-1ccbc4d73f93" path="/var/lib/kubelet/pods/0ed3a39d-494f-40f0-a336-1ccbc4d73f93/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.989853 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.989962 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="174dd2a1-292b-4e07-8ed8-48d4109e9f57" path="/var/lib/kubelet/pods/174dd2a1-292b-4e07-8ed8-48d4109e9f57/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.990631 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c" path="/var/lib/kubelet/pods/2b6ec0ce-9b68-4f5c-beed-a81e9d10b43c/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.991192 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6986c0d4-ba42-44f3-ba4f-51872e0345b6" path="/var/lib/kubelet/pods/6986c0d4-ba42-44f3-ba4f-51872e0345b6/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.992618 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8539e2-3b3b-488b-9111-d59aa7317490" path="/var/lib/kubelet/pods/7c8539e2-3b3b-488b-9111-d59aa7317490/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.993268 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" path="/var/lib/kubelet/pods/7cb8ede6-6163-4906-89f7-7fe6458edc36/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.993804 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b12268ff-1299-46c4-8937-8c4dc00f7dc5" path="/var/lib/kubelet/pods/b12268ff-1299-46c4-8937-8c4dc00f7dc5/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.995540 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" path="/var/lib/kubelet/pods/b2264eb6-f494-4800-832b-d1e1d02daf4e/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.996062 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9dbc131-33f8-4df7-88ba-2d90e93a436c" path="/var/lib/kubelet/pods/b9dbc131-33f8-4df7-88ba-2d90e93a436c/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.996646 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" path="/var/lib/kubelet/pods/c4f68d8b-5bb9-4778-9416-1e9f82bba8f3/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.998036 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" path="/var/lib/kubelet/pods/db2cd03e-b999-48ea-b540-7fd35356ba8b/volumes" Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.998645 4812 generic.go:334] "Generic (PLEG): container finished" podID="87b80396-f87d-435e-8478-0ecb34bccd94" containerID="5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" exitCode=0 Nov 24 19:39:40 crc kubenswrapper[4812]: I1124 19:39:40.998734 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.003053 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-5n45p"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.003228 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"87b80396-f87d-435e-8478-0ecb34bccd94","Type":"ContainerDied","Data":"5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd"} Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.003326 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"87b80396-f87d-435e-8478-0ecb34bccd94","Type":"ContainerDied","Data":"1430864e45d55ed01cabb5542b7ac160ca398a46ef223d63a7d5381c2de17d04"} Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.003407 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7fd2-account-create-psclf"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.003470 4812 scope.go:117] "RemoveContainer" containerID="5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.006407 4812 generic.go:334] "Generic (PLEG): container finished" podID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" exitCode=0 Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.006478 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36aff13b-5f97-40c0-8c88-64c93ce91bcb","Type":"ContainerDied","Data":"43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447"} Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.006495 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.006506 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"36aff13b-5f97-40c0-8c88-64c93ce91bcb","Type":"ContainerDied","Data":"b50e4e72e94a42f7e72562434ad82fb05fd349b3c5f911909f455c087fa9cf28"} Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.012090 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7fd2-account-create-psclf"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.023085 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance7fd2-account-delete-vl62k"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.023293 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance7fd2-account-delete-vl62k" podUID="c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" containerName="mariadb-account-delete" containerID="cri-o://d5fae8072f9f408d92aa3d01e0eff2bf277ff66460adda263052312bf707a170" gracePeriod=30 Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.033055 4812 scope.go:117] "RemoveContainer" containerID="5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.035125 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd\": container with ID starting with 5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd not found: ID does not exist" containerID="5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.035167 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd"} err="failed to get container status \"5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd\": rpc error: code = NotFound desc = could not find container \"5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd\": container with ID starting with 5589fa331fbb111d8059fa5d842a1f4818d7b185257f4ad95b60c65b7ba3a2fd not found: ID does not exist" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.035192 4812 scope.go:117] "RemoveContainer" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.044624 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-vjwqm"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.050262 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vjwqm"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.066279 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbicanfac2-account-delete-v4hfz"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.066329 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fac2-account-create-szxkh"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.066356 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fac2-account-create-szxkh"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.070655 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbicanfac2-account-delete-v4hfz"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.093964 4812 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-config-data\") pod \"87b80396-f87d-435e-8478-0ecb34bccd94\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.094054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlr2w\" (UniqueName: \"kubernetes.io/projected/36aff13b-5f97-40c0-8c88-64c93ce91bcb-kube-api-access-wlr2w\") pod \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.094085 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-config-data\") pod \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.094103 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-combined-ca-bundle\") pod \"87b80396-f87d-435e-8478-0ecb34bccd94\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.094123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-combined-ca-bundle\") pod \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\" (UID: \"36aff13b-5f97-40c0-8c88-64c93ce91bcb\") " Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.099583 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjf65\" (UniqueName: \"kubernetes.io/projected/87b80396-f87d-435e-8478-0ecb34bccd94-kube-api-access-gjf65\") pod \"87b80396-f87d-435e-8478-0ecb34bccd94\" (UID: \"87b80396-f87d-435e-8478-0ecb34bccd94\") " Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.094358 4812 scope.go:117] "RemoveContainer" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.102003 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36aff13b-5f97-40c0-8c88-64c93ce91bcb-kube-api-access-wlr2w" (OuterVolumeSpecName: "kube-api-access-wlr2w") pod "36aff13b-5f97-40c0-8c88-64c93ce91bcb" (UID: "36aff13b-5f97-40c0-8c88-64c93ce91bcb"). InnerVolumeSpecName "kube-api-access-wlr2w". 
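
The kubelet_volumes.go:163 entries a few lines back ("Cleaned up orphaned pod volumes dir") come from the kubelet's housekeeping pass: directories under /var/lib/kubelet/pods whose UID no longer matches any known pod are removed once their volumes have been torn down. A rough stdlib sketch of that scan; the path handling and the active-UID set are illustrative only.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphans removes per-pod volume directories whose UID is no
// longer tracked by the kubelet, loosely mirroring the housekeeping
// pass that logs "Cleaned up orphaned pod volumes dir".
func cleanupOrphans(podsDir string, active map[string]bool) error {
	entries, err := os.ReadDir(podsDir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue
		}
		dir := filepath.Join(podsDir, e.Name(), "volumes")
		// The real kubelet refuses to delete while volume dirs are
		// still populated; this sketch just removes the empty tree.
		if err := os.RemoveAll(dir); err != nil {
			return err
		}
		fmt.Printf("Cleaned up orphaned pod volumes dir %s\n", dir)
	}
	return nil
}

func main() {
	active := map[string]bool{"87b80396-f87d-435e-8478-0ecb34bccd94": true}
	if err := cleanupOrphans("/var/lib/kubelet/pods", active); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
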
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.102186 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447\": container with ID starting with 43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447 not found: ID does not exist" containerID="43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.102221 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447"} err="failed to get container status \"43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447\": rpc error: code = NotFound desc = could not find container \"43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447\": container with ID starting with 43fd51907e2ff8ea08d221fd8a7a09a85a3f90b61cf1114371ca7d05e3dcf447 not found: ID does not exist" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.113163 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-7l8kj"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.116214 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b80396-f87d-435e-8478-0ecb34bccd94-kube-api-access-gjf65" (OuterVolumeSpecName: "kube-api-access-gjf65") pod "87b80396-f87d-435e-8478-0ecb34bccd94" (UID: "87b80396-f87d-435e-8478-0ecb34bccd94"). InnerVolumeSpecName "kube-api-access-gjf65". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.118202 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-7l8kj"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.127755 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder5926-account-delete-lvwv5"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.129280 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87b80396-f87d-435e-8478-0ecb34bccd94" (UID: "87b80396-f87d-435e-8478-0ecb34bccd94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.131296 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36aff13b-5f97-40c0-8c88-64c93ce91bcb" (UID: "36aff13b-5f97-40c0-8c88-64c93ce91bcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.131847 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-config-data" (OuterVolumeSpecName: "config-data") pod "87b80396-f87d-435e-8478-0ecb34bccd94" (UID: "87b80396-f87d-435e-8478-0ecb34bccd94"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.132997 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-config-data" (OuterVolumeSpecName: "config-data") pod "36aff13b-5f97-40c0-8c88-64c93ce91bcb" (UID: "36aff13b-5f97-40c0-8c88-64c93ce91bcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.133561 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5926-account-create-lsmsd"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.138181 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder5926-account-delete-lvwv5"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.142414 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5926-account-create-lsmsd"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.201808 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.201840 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlr2w\" (UniqueName: \"kubernetes.io/projected/36aff13b-5f97-40c0-8c88-64c93ce91bcb-kube-api-access-wlr2w\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.201850 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.201859 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b80396-f87d-435e-8478-0ecb34bccd94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.201867 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36aff13b-5f97-40c0-8c88-64c93ce91bcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.201877 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjf65\" (UniqueName: \"kubernetes.io/projected/87b80396-f87d-435e-8478-0ecb34bccd94-kube-api-access-gjf65\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.342356 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.358414 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.365623 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:39:41 crc kubenswrapper[4812]: I1124 19:39:41.372826 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.608382 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.608458 4812 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts podName:c2d42cca-2cc7-4b6a-9f1d-215f522b4c82 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:45.608439515 +0000 UTC m=+1379.397391886 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts") pod "glance7fd2-account-delete-vl62k" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82") : configmap "openstack-scripts" not found Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.640491 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.641074 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.641546 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.641632 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.641973 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.645041 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.652458 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:41 crc kubenswrapper[4812]: E1124 19:39:41.652563 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.114092 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.114162 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.983257 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" path="/var/lib/kubelet/pods/36aff13b-5f97-40c0-8c88-64c93ce91bcb/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.984776 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61c60661-872c-43f4-8830-7d96fd575bd8" path="/var/lib/kubelet/pods/61c60661-872c-43f4-8830-7d96fd575bd8/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.986095 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d5ebd7-a16f-49bb-a80e-1138d5f197d3" path="/var/lib/kubelet/pods/72d5ebd7-a16f-49bb-a80e-1138d5f197d3/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.987492 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b80396-f87d-435e-8478-0ecb34bccd94" path="/var/lib/kubelet/pods/87b80396-f87d-435e-8478-0ecb34bccd94/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.989776 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a190769b-2cd0-4c64-944c-3a66e6b61e95" path="/var/lib/kubelet/pods/a190769b-2cd0-4c64-944c-3a66e6b61e95/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.990682 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2b021e1-1d1a-422d-994a-3af278f1145b" path="/var/lib/kubelet/pods/b2b021e1-1d1a-422d-994a-3af278f1145b/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.991527 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22383eb-b38e-415a-b0e4-bde087a20c04" path="/var/lib/kubelet/pods/c22383eb-b38e-415a-b0e4-bde087a20c04/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.993717 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb65fa33-2178-4d99-9279-ace1b50e4089" path="/var/lib/kubelet/pods/cb65fa33-2178-4d99-9279-ace1b50e4089/volumes" Nov 24 19:39:42 crc kubenswrapper[4812]: I1124 19:39:42.994743 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb137c7-3a42-4a4a-b463-f27806805277" path="/var/lib/kubelet/pods/cfb137c7-3a42-4a4a-b463-f27806805277/volumes" Nov 
Nov 24 19:39:43 crc kubenswrapper[4812]: E1124 19:39:43.946896 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:43 crc kubenswrapper[4812]: E1124 19:39:43.947384 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts podName:2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:51.947326167 +0000 UTC m=+1385.736278578 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts") pod "novaapi5538-account-delete-t9sww" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9") : configmap "openstack-scripts" not found Nov 24 19:39:45 crc kubenswrapper[4812]: E1124 19:39:45.675681 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:45 crc kubenswrapper[4812]: E1124 19:39:45.675754 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts podName:c2d42cca-2cc7-4b6a-9f1d-215f522b4c82 nodeName:}" failed. No retries permitted until 2025-11-24 19:39:53.675736643 +0000 UTC m=+1387.464689014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts") pod "glance7fd2-account-delete-vl62k" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82") : configmap "openstack-scripts" not found Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.638986 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.639595 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.640064 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.640128 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server"
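The two ExecSync failure shapes above differ only in the gRPC status code the CRI runtime returned: code = NotFound means CRI-O could no longer find a live process for the container at all, while code = Unknown wraps the "cannot register an exec PID: container is stopping" runtime error. A small sketch of branching on those codes with the standard grpc status package (errors constructed locally for illustration; assumes the google.golang.org/grpc module is available):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// classifyExecErr shows how a caller can branch on the status code that
// a CRI ExecSync call returned, mirroring the two cases in the log.
func classifyExecErr(err error) string {
	switch status.Code(err) {
	case codes.NotFound:
		return "container process already gone; probe result is moot"
	case codes.Unknown:
		return "runtime refused the exec (e.g. container is stopping)"
	default:
		return "other exec failure"
	}
}

func main() {
	notFound := status.Error(codes.NotFound, "container is not created or running")
	unknown := status.Error(codes.Unknown, "cannot register an exec PID: container is stopping")
	fmt.Println(classifyExecErr(notFound))
	fmt.Println(classifyExecErr(unknown))
}
```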
Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.640215 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.641877 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.643552 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:46 crc kubenswrapper[4812]: E1124 19:39:46.643614 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd"
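Note the two distinct prober messages in this stretch: prober.go:104 ("Probe errored") fires when the probe could not be executed at all, i.e. the exec or HTTP call itself returned an error, whereas prober.go:107 ("Probe failed"), as in the nova-metadata entries earlier, means the probe ran and came back unhealthy. A sketch of that two-level distinction (hypothetical types, not the kubelet's own):

```go
package main

import "fmt"

type probeResult string

const (
	success probeResult = "success"
	failure probeResult = "failure"
)

// report mirrors the kubelet's split between a probe that errored
// (could not run at all) and one that ran but returned an unhealthy result.
func report(result probeResult, err error) {
	if err != nil {
		fmt.Printf("Probe errored: %v\n", err) // prober.go:104 in the log
		return
	}
	if result != success {
		fmt.Printf("Probe failed: %s\n", result) // prober.go:107 in the log
	}
}

func main() {
	report(failure, nil)                            // ran, unhealthy
	report("", fmt.Errorf("container is stopping")) // could not run
}
```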
Nov 24 19:39:50 crc kubenswrapper[4812]: I1124 19:39:50.986378 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.072624 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-config\") pod \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.072692 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-ovndb-tls-certs\") pod \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.072711 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-internal-tls-certs\") pod \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.072784 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-httpd-config\") pod \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.072813 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-public-tls-certs\") pod \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.072831 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-combined-ca-bundle\") pod \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.072854 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg2vh\" (UniqueName: \"kubernetes.io/projected/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-kube-api-access-zg2vh\") pod \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\" (UID: \"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa\") " Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.078233 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" (UID: "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.089189 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-kube-api-access-zg2vh" (OuterVolumeSpecName: "kube-api-access-zg2vh") pod "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" (UID: "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa"). InnerVolumeSpecName "kube-api-access-zg2vh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.108881 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-config" (OuterVolumeSpecName: "config") pod "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" (UID: "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.116106 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" (UID: "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.127286 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" (UID: "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.137811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" (UID: "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.139131 4812 generic.go:334] "Generic (PLEG): container finished" podID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerID="1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883" exitCode=0 Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.139223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f58df686c-jn8qq" event={"ID":"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa","Type":"ContainerDied","Data":"1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883"} Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.139756 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f58df686c-jn8qq" event={"ID":"cf04fa0a-bd1d-442f-afff-05fe1ebdeafa","Type":"ContainerDied","Data":"c6b53af358657b435e4037070ddfedfd9df942ef3c054809152a79994b960257"} Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.139708 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f58df686c-jn8qq" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.139860 4812 scope.go:117] "RemoveContainer" containerID="9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.147939 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" (UID: "cf04fa0a-bd1d-442f-afff-05fe1ebdeafa"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174195 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174223 4812 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174236 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174249 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174261 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174272 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174283 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zg2vh\" (UniqueName: \"kubernetes.io/projected/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa-kube-api-access-zg2vh\") on node \"crc\" DevicePath \"\"" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.174231 4812 scope.go:117] "RemoveContainer" containerID="1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.204817 4812 scope.go:117] "RemoveContainer" containerID="9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b" Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.205548 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b\": container with ID starting with 9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b not found: ID does not exist" containerID="9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.205642 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b"} err="failed to get container status \"9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b\": rpc error: code = NotFound desc = could not find container \"9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b\": container with ID starting with 9eecdb130173be4362c4b01b4d7a91e9916967b8217d2c08f6922d9d056d6e8b not found: ID does not exist" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.205723 4812 scope.go:117] "RemoveContainer" containerID="1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883" Nov 24 19:39:51 crc 
Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.206246 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883\": container with ID starting with 1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883 not found: ID does not exist" containerID="1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.206322 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883"} err="failed to get container status \"1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883\": rpc error: code = NotFound desc = could not find container \"1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883\": container with ID starting with 1f11baf8ed33d8f96d179e8923989ae7c6c575405267a06b72ced9510109a883 not found: ID does not exist" Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.512125 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f58df686c-jn8qq"] Nov 24 19:39:51 crc kubenswrapper[4812]: I1124 19:39:51.516290 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f58df686c-jn8qq"] Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.638118 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.638689 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.639239 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.639368 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.640192 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.642386 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.646227 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.646307 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.991383 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:51 crc kubenswrapper[4812]: E1124 19:39:51.991481 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts podName:2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9 nodeName:}" failed. No retries permitted until 2025-11-24 19:40:07.991459323 +0000 UTC m=+1401.780411774 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts") pod "novaapi5538-account-delete-t9sww" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9") : configmap "openstack-scripts" not found Nov 24 19:39:52 crc kubenswrapper[4812]: I1124 19:39:52.982150 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" path="/var/lib/kubelet/pods/cf04fa0a-bd1d-442f-afff-05fe1ebdeafa/volumes" Nov 24 19:39:53 crc kubenswrapper[4812]: E1124 19:39:53.719006 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:39:53 crc kubenswrapper[4812]: E1124 19:39:53.719081 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts podName:c2d42cca-2cc7-4b6a-9f1d-215f522b4c82 nodeName:}" failed. No retries permitted until 2025-11-24 19:40:09.719067495 +0000 UTC m=+1403.508019856 (durationBeforeRetry 16s). 
Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.638163 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.638712 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.639169 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.639225 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.640257 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.642181 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.644945 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:39:56 crc kubenswrapper[4812]: E1124 19:39:56.645011 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.639208 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.639963 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.640585 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.641795 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.641879 4812 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.642185 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.644239 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 24 19:40:01 crc kubenswrapper[4812]: E1124 19:40:01.644308 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/ovn-controller-ovs-cgt9p" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.797438 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgt9p_ba9f8224-4f20-4c27-b242-3385791aed68/ovs-vswitchd/0.log" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.799239 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871364 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-lib\") pod \"ba9f8224-4f20-4c27-b242-3385791aed68\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871428 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-log\") pod \"ba9f8224-4f20-4c27-b242-3385791aed68\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871446 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-etc-ovs\") pod \"ba9f8224-4f20-4c27-b242-3385791aed68\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871477 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-run\") pod \"ba9f8224-4f20-4c27-b242-3385791aed68\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871501 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba9f8224-4f20-4c27-b242-3385791aed68-scripts\") pod \"ba9f8224-4f20-4c27-b242-3385791aed68\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871510 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-lib" (OuterVolumeSpecName: "var-lib") pod "ba9f8224-4f20-4c27-b242-3385791aed68" (UID: "ba9f8224-4f20-4c27-b242-3385791aed68"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871579 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "ba9f8224-4f20-4c27-b242-3385791aed68" (UID: "ba9f8224-4f20-4c27-b242-3385791aed68"). InnerVolumeSpecName "etc-ovs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871605 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxtdj\" (UniqueName: \"kubernetes.io/projected/ba9f8224-4f20-4c27-b242-3385791aed68-kube-api-access-xxtdj\") pod \"ba9f8224-4f20-4c27-b242-3385791aed68\" (UID: \"ba9f8224-4f20-4c27-b242-3385791aed68\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871616 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-log" (OuterVolumeSpecName: "var-log") pod "ba9f8224-4f20-4c27-b242-3385791aed68" (UID: "ba9f8224-4f20-4c27-b242-3385791aed68"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871647 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-run" (OuterVolumeSpecName: "var-run") pod "ba9f8224-4f20-4c27-b242-3385791aed68" (UID: "ba9f8224-4f20-4c27-b242-3385791aed68"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871880 4812 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-lib\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871894 4812 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-log\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871902 4812 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.871910 4812 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ba9f8224-4f20-4c27-b242-3385791aed68-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.873442 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba9f8224-4f20-4c27-b242-3385791aed68-scripts" (OuterVolumeSpecName: "scripts") pod "ba9f8224-4f20-4c27-b242-3385791aed68" (UID: "ba9f8224-4f20-4c27-b242-3385791aed68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.877435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba9f8224-4f20-4c27-b242-3385791aed68-kube-api-access-xxtdj" (OuterVolumeSpecName: "kube-api-access-xxtdj") pod "ba9f8224-4f20-4c27-b242-3385791aed68" (UID: "ba9f8224-4f20-4c27-b242-3385791aed68"). InnerVolumeSpecName "kube-api-access-xxtdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.882263 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.930217 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.977724 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p8cv\" (UniqueName: \"kubernetes.io/projected/7063bb41-d0d1-4605-b265-1fb3adce77b5-kube-api-access-5p8cv\") pod \"7063bb41-d0d1-4605-b265-1fb3adce77b5\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.978007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7063bb41-d0d1-4605-b265-1fb3adce77b5-etc-machine-id\") pod \"7063bb41-d0d1-4605-b265-1fb3adce77b5\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.978259 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data\") pod \"7063bb41-d0d1-4605-b265-1fb3adce77b5\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.978394 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data-custom\") pod \"7063bb41-d0d1-4605-b265-1fb3adce77b5\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.979537 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7063bb41-d0d1-4605-b265-1fb3adce77b5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7063bb41-d0d1-4605-b265-1fb3adce77b5" (UID: "7063bb41-d0d1-4605-b265-1fb3adce77b5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.980493 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-scripts\") pod \"7063bb41-d0d1-4605-b265-1fb3adce77b5\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.980798 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-combined-ca-bundle\") pod \"7063bb41-d0d1-4605-b265-1fb3adce77b5\" (UID: \"7063bb41-d0d1-4605-b265-1fb3adce77b5\") " Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.982530 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7063bb41-d0d1-4605-b265-1fb3adce77b5-kube-api-access-5p8cv" (OuterVolumeSpecName: "kube-api-access-5p8cv") pod "7063bb41-d0d1-4605-b265-1fb3adce77b5" (UID: "7063bb41-d0d1-4605-b265-1fb3adce77b5"). InnerVolumeSpecName "kube-api-access-5p8cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.982749 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7063bb41-d0d1-4605-b265-1fb3adce77b5" (UID: "7063bb41-d0d1-4605-b265-1fb3adce77b5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.982870 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ba9f8224-4f20-4c27-b242-3385791aed68-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.983235 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7063bb41-d0d1-4605-b265-1fb3adce77b5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.983526 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxtdj\" (UniqueName: \"kubernetes.io/projected/ba9f8224-4f20-4c27-b242-3385791aed68-kube-api-access-xxtdj\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:02 crc kubenswrapper[4812]: I1124 19:40:02.985418 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-scripts" (OuterVolumeSpecName: "scripts") pod "7063bb41-d0d1-4605-b265-1fb3adce77b5" (UID: "7063bb41-d0d1-4605-b265-1fb3adce77b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.066542 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7063bb41-d0d1-4605-b265-1fb3adce77b5" (UID: "7063bb41-d0d1-4605-b265-1fb3adce77b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.097773 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-cache\") pod \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.098031 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") pod \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.098107 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-lock\") pod \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.098200 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zlfw\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-kube-api-access-6zlfw\") pod \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.098306 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\" (UID: \"c127eda7-bbfe-4197-b5cf-f4f99824d0c8\") " Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.098836 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.098911 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.099015 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p8cv\" (UniqueName: \"kubernetes.io/projected/7063bb41-d0d1-4605-b265-1fb3adce77b5-kube-api-access-5p8cv\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.099069 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.100135 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-cache" (OuterVolumeSpecName: "cache") pod "c127eda7-bbfe-4197-b5cf-f4f99824d0c8" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.102170 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-lock" (OuterVolumeSpecName: "lock") pod "c127eda7-bbfe-4197-b5cf-f4f99824d0c8" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.103461 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data" (OuterVolumeSpecName: "config-data") pod "7063bb41-d0d1-4605-b265-1fb3adce77b5" (UID: "7063bb41-d0d1-4605-b265-1fb3adce77b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.107552 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-kube-api-access-6zlfw" (OuterVolumeSpecName: "kube-api-access-6zlfw") pod "c127eda7-bbfe-4197-b5cf-f4f99824d0c8" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8"). InnerVolumeSpecName "kube-api-access-6zlfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.108131 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c127eda7-bbfe-4197-b5cf-f4f99824d0c8" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.110461 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "swift") pod "c127eda7-bbfe-4197-b5cf-f4f99824d0c8" (UID: "c127eda7-bbfe-4197-b5cf-f4f99824d0c8"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.206702 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.206753 4812 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-lock\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.206762 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zlfw\" (UniqueName: \"kubernetes.io/projected/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-kube-api-access-6zlfw\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.206796 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.206810 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7063bb41-d0d1-4605-b265-1fb3adce77b5-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.206822 4812 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c127eda7-bbfe-4197-b5cf-f4f99824d0c8-cache\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.227253 4812 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.266213 4812 generic.go:334] "Generic (PLEG): container finished" podID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerID="5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4" exitCode=137 Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.266296 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4"} Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.266610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c127eda7-bbfe-4197-b5cf-f4f99824d0c8","Type":"ContainerDied","Data":"f8448a8883db6320f67d86b35577d4e30a265e0d61e35756c50f18419d8bdb74"} Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.266315 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.266653 4812 scope.go:117] "RemoveContainer" containerID="5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.268976 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cgt9p_ba9f8224-4f20-4c27-b242-3385791aed68/ovs-vswitchd/0.log" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.269683 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba9f8224-4f20-4c27-b242-3385791aed68" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" exitCode=137 Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.269738 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerDied","Data":"623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680"} Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.269768 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cgt9p" event={"ID":"ba9f8224-4f20-4c27-b242-3385791aed68","Type":"ContainerDied","Data":"2ba64b4c1c22dae45b432998cfce6e7d324e8169cf8b0d00fc53fc3863ed3a32"} Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.269837 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cgt9p" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.272386 4812 generic.go:334] "Generic (PLEG): container finished" podID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerID="de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b" exitCode=137 Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.272410 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7063bb41-d0d1-4605-b265-1fb3adce77b5","Type":"ContainerDied","Data":"de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b"} Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.272427 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7063bb41-d0d1-4605-b265-1fb3adce77b5","Type":"ContainerDied","Data":"6f3944b14538688157807d59761a609046da96a18831c2ce92e137c597678a95"} Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.272471 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.297947 4812 scope.go:117] "RemoveContainer" containerID="52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.300145 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-cgt9p"] Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.308312 4812 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.308377 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-cgt9p"] Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.315843 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.323242 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.328722 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.333113 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.334607 4812 scope.go:117] "RemoveContainer" containerID="7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.376108 4812 scope.go:117] "RemoveContainer" containerID="583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.391166 4812 scope.go:117] "RemoveContainer" containerID="e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.407871 4812 scope.go:117] "RemoveContainer" containerID="6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.429144 4812 scope.go:117] "RemoveContainer" containerID="369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.446009 4812 scope.go:117] "RemoveContainer" containerID="5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.468412 4812 scope.go:117] "RemoveContainer" containerID="0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.489579 4812 scope.go:117] "RemoveContainer" containerID="f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.510662 4812 scope.go:117] "RemoveContainer" containerID="3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.543978 4812 scope.go:117] "RemoveContainer" containerID="95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.568793 4812 scope.go:117] "RemoveContainer" containerID="85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.584730 4812 scope.go:117] "RemoveContainer" 
containerID="d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.600121 4812 scope.go:117] "RemoveContainer" containerID="ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.626624 4812 scope.go:117] "RemoveContainer" containerID="5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.627213 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4\": container with ID starting with 5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4 not found: ID does not exist" containerID="5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.627265 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4"} err="failed to get container status \"5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4\": rpc error: code = NotFound desc = could not find container \"5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4\": container with ID starting with 5da49ed3bd43bccf7c1c7a762249c0b9563d0a34713a10ddabb4f29f3bb2d6c4 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.627306 4812 scope.go:117] "RemoveContainer" containerID="52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.627891 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1\": container with ID starting with 52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1 not found: ID does not exist" containerID="52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.627933 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1"} err="failed to get container status \"52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1\": rpc error: code = NotFound desc = could not find container \"52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1\": container with ID starting with 52a6492fd3d0deacb1f3de4099c2af1b8e13196fcc94e5057991492ec70dceb1 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.627946 4812 scope.go:117] "RemoveContainer" containerID="7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.628244 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686\": container with ID starting with 7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686 not found: ID does not exist" containerID="7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.628263 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686"} err="failed to get container status \"7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686\": rpc error: code = NotFound desc = could not find container \"7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686\": container with ID starting with 7dfb7bb5f541d3e6a7b34ba4b2765ad1a014b13d9acd5887236f22389f857686 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.628275 4812 scope.go:117] "RemoveContainer" containerID="583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.628595 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8\": container with ID starting with 583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8 not found: ID does not exist" containerID="583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.628659 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8"} err="failed to get container status \"583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8\": rpc error: code = NotFound desc = could not find container \"583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8\": container with ID starting with 583024e52dfa2d0198e9e008ff828f760750f1c77c129940527456b669651af8 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.628683 4812 scope.go:117] "RemoveContainer" containerID="e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.629001 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c\": container with ID starting with e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c not found: ID does not exist" containerID="e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.629057 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c"} err="failed to get container status \"e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c\": rpc error: code = NotFound desc = could not find container \"e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c\": container with ID starting with e7a4a9c734691227c7726486773c640c7222fb5fb72a6dfe0e67b5f73d49779c not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.629102 4812 scope.go:117] "RemoveContainer" containerID="6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.629432 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830\": container with ID starting with 6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830 not found: ID does not exist" 
containerID="6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.629459 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830"} err="failed to get container status \"6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830\": rpc error: code = NotFound desc = could not find container \"6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830\": container with ID starting with 6f8586c7b6379e41339cf4003e181ac17b8000c4b6cf916c10b8bd5974063830 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.629474 4812 scope.go:117] "RemoveContainer" containerID="369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.630017 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3\": container with ID starting with 369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3 not found: ID does not exist" containerID="369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.630044 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3"} err="failed to get container status \"369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3\": rpc error: code = NotFound desc = could not find container \"369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3\": container with ID starting with 369329005ba708cb3da74c4eb5f269c7a9bc67780612f12579a184b149f8f6a3 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.630058 4812 scope.go:117] "RemoveContainer" containerID="5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.630498 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124\": container with ID starting with 5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124 not found: ID does not exist" containerID="5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.630521 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124"} err="failed to get container status \"5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124\": rpc error: code = NotFound desc = could not find container \"5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124\": container with ID starting with 5c72f1df163389e7bb1686785b1e834a0fcaae9c25f9b0e0118c50ce389f8124 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.630536 4812 scope.go:117] "RemoveContainer" containerID="0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.630974 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b\": container with ID starting with 0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b not found: ID does not exist" containerID="0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.630993 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b"} err="failed to get container status \"0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b\": rpc error: code = NotFound desc = could not find container \"0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b\": container with ID starting with 0f2648d680c181c55b3d5b4aac28a54db4a0cc1f041a8502edaf196dec2f5d2b not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.631005 4812 scope.go:117] "RemoveContainer" containerID="f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.631455 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145\": container with ID starting with f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145 not found: ID does not exist" containerID="f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.631472 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145"} err="failed to get container status \"f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145\": rpc error: code = NotFound desc = could not find container \"f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145\": container with ID starting with f42cebfcb67a0c82f8b5ea0093df23a7f3db4c4d55c27eba7281fffcdffb9145 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.631486 4812 scope.go:117] "RemoveContainer" containerID="3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.631731 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165\": container with ID starting with 3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165 not found: ID does not exist" containerID="3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.631749 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165"} err="failed to get container status \"3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165\": rpc error: code = NotFound desc = could not find container \"3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165\": container with ID starting with 3737b1e379814063ab73fb73d8cadf66e16906093ac70c6122f3dd28c3ef1165 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.631761 4812 scope.go:117] "RemoveContainer" containerID="95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b" Nov 24 19:40:03 crc 
kubenswrapper[4812]: E1124 19:40:03.632030 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b\": container with ID starting with 95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b not found: ID does not exist" containerID="95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.632078 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b"} err="failed to get container status \"95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b\": rpc error: code = NotFound desc = could not find container \"95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b\": container with ID starting with 95ee721a93b130c2b838cad7d2589d756c66158d110fc387dea0a070f5cab34b not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.632107 4812 scope.go:117] "RemoveContainer" containerID="85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.632557 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea\": container with ID starting with 85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea not found: ID does not exist" containerID="85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.632579 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea"} err="failed to get container status \"85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea\": rpc error: code = NotFound desc = could not find container \"85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea\": container with ID starting with 85c687b61ffed21faaf0d22af2f315067e640622918bcd4af610650579b3e1ea not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.632593 4812 scope.go:117] "RemoveContainer" containerID="d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.632869 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce\": container with ID starting with d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce not found: ID does not exist" containerID="d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.632896 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce"} err="failed to get container status \"d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce\": rpc error: code = NotFound desc = could not find container \"d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce\": container with ID starting with d1f7937bf24142a9d61506b61ddf9aa74aef7f7791c9f2de08d17609dc7039ce not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: 
I1124 19:40:03.632913 4812 scope.go:117] "RemoveContainer" containerID="ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.633262 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f\": container with ID starting with ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f not found: ID does not exist" containerID="ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.633283 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f"} err="failed to get container status \"ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f\": rpc error: code = NotFound desc = could not find container \"ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f\": container with ID starting with ec513a4be128ca8f5f9f823a774f668bfdc27e8b505dfc23a7f565ab756b025f not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.633321 4812 scope.go:117] "RemoveContainer" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.662507 4812 scope.go:117] "RemoveContainer" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.682853 4812 scope.go:117] "RemoveContainer" containerID="accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.703677 4812 scope.go:117] "RemoveContainer" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.704072 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680\": container with ID starting with 623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680 not found: ID does not exist" containerID="623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.704111 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680"} err="failed to get container status \"623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680\": rpc error: code = NotFound desc = could not find container \"623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680\": container with ID starting with 623503fb367d692e2fe7a9aff81186c002df606d233c62f7e0b5d27e9297c680 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.704140 4812 scope.go:117] "RemoveContainer" containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.704461 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5\": container with ID starting with 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 not found: ID does not exist" 
containerID="20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.704542 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5"} err="failed to get container status \"20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5\": rpc error: code = NotFound desc = could not find container \"20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5\": container with ID starting with 20099ce5f394eee0d6ab561ca944d87a7c9213dbc5475ce26c39f673257db0d5 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.704643 4812 scope.go:117] "RemoveContainer" containerID="accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.704970 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1\": container with ID starting with accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1 not found: ID does not exist" containerID="accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.705002 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1"} err="failed to get container status \"accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1\": rpc error: code = NotFound desc = could not find container \"accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1\": container with ID starting with accc8d815be23caec019536041925afd0f5ed5a4e6e387fd00b9c231da9efae1 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.705017 4812 scope.go:117] "RemoveContainer" containerID="03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.734298 4812 scope.go:117] "RemoveContainer" containerID="de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.750850 4812 scope.go:117] "RemoveContainer" containerID="03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.751449 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347\": container with ID starting with 03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347 not found: ID does not exist" containerID="03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.751493 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347"} err="failed to get container status \"03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347\": rpc error: code = NotFound desc = could not find container \"03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347\": container with ID starting with 03b569a149de1654124dbb26b9b742abf3f34b40cd6556a3ae9992326e8dd347 not found: ID does not exist" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 
19:40:03.751521 4812 scope.go:117] "RemoveContainer" containerID="de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b" Nov 24 19:40:03 crc kubenswrapper[4812]: E1124 19:40:03.751845 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b\": container with ID starting with de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b not found: ID does not exist" containerID="de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b" Nov 24 19:40:03 crc kubenswrapper[4812]: I1124 19:40:03.751931 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b"} err="failed to get container status \"de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b\": rpc error: code = NotFound desc = could not find container \"de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b\": container with ID starting with de017585554f7c04efa116007ce5cd58c485707a3172724fa32769316d9a7c7b not found: ID does not exist" Nov 24 19:40:04 crc kubenswrapper[4812]: I1124 19:40:04.980637 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" path="/var/lib/kubelet/pods/7063bb41-d0d1-4605-b265-1fb3adce77b5/volumes" Nov 24 19:40:04 crc kubenswrapper[4812]: I1124 19:40:04.982132 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" path="/var/lib/kubelet/pods/ba9f8224-4f20-4c27-b242-3385791aed68/volumes" Nov 24 19:40:04 crc kubenswrapper[4812]: I1124 19:40:04.983666 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" path="/var/lib/kubelet/pods/c127eda7-bbfe-4197-b5cf-f4f99824d0c8/volumes" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.488424 4812 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8f7453e8-f9c5-4588-80b8-82bba37e1514"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8f7453e8-f9c5-4588-80b8-82bba37e1514] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8f7453e8_f9c5_4588_80b8_82bba37e1514.slice" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.627778 4812 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddfeb89d5-a81d-411c-8808-ae9f506780e2"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddfeb89d5-a81d-411c-8808-ae9f506780e2] : Timed out while waiting for systemd to remove kubepods-besteffort-poddfeb89d5_a81d_411c_8808_ae9f506780e2.slice" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765273 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w42fz"] Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765624 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765639 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765658 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" 
containerName="account-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765666 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-server" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765683 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="cinder-scheduler" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765692 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="cinder-scheduler" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765705 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server-init" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765713 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server-init" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765727 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-central-agent" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765735 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-central-agent" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765751 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765759 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765768 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765777 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765792 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerName="setup-container" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765801 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerName="setup-container" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765810 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765819 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-api" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765828 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8539e2-3b3b-488b-9111-d59aa7317490" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765836 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8539e2-3b3b-488b-9111-d59aa7317490" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765850 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765858 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765867 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="proxy-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765875 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="proxy-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765885 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-metadata" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765893 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-metadata" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765906 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765915 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765948 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerName="setup-container" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765957 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerName="setup-container" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765968 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="openstack-network-exporter" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765977 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="openstack-network-exporter" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.765990 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.765999 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-log" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766015 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766023 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener-log" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766034 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766042 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 
19:40:05.766053 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766061 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766075 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="174dd2a1-292b-4e07-8ed8-48d4109e9f57" containerName="memcached" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766084 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="174dd2a1-292b-4e07-8ed8-48d4109e9f57" containerName="memcached" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766092 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766100 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-log" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766111 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerName="mysql-bootstrap" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766119 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerName="mysql-bootstrap" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766128 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="ovn-northd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766136 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="ovn-northd" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766145 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerName="placement-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766154 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerName="placement-log" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766165 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72df6d7-27f7-49e5-93d9-4069db72b602" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766173 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72df6d7-27f7-49e5-93d9-4069db72b602" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766188 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766361 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-server" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766388 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerName="galera" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766399 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerName="galera" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766419 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="probe" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766429 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="probe" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766450 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d5ebd7-a16f-49bb-a80e-1138d5f197d3" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766461 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d5ebd7-a16f-49bb-a80e-1138d5f197d3" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766480 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766491 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766503 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" containerName="nova-scheduler-scheduler" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766513 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" containerName="nova-scheduler-scheduler" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766524 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerName="rabbitmq" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766531 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerName="rabbitmq" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766548 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="rsync" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766557 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="rsync" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766569 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766577 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766587 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b80396-f87d-435e-8478-0ecb34bccd94" containerName="nova-cell1-conductor-conductor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766595 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b80396-f87d-435e-8478-0ecb34bccd94" containerName="nova-cell1-conductor-conductor" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766607 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766615 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-log" Nov 24 19:40:05 crc 
kubenswrapper[4812]: E1124 19:40:05.766626 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766634 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766645 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766653 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-server" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766662 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-updater" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766670 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-updater" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766681 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766688 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766701 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-updater" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766710 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-updater" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766726 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-notification-agent" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766734 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-notification-agent" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766747 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93240875-2cab-44e0-b475-20f46cb4850e" containerName="nova-cell0-conductor-conductor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766756 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="93240875-2cab-44e0-b475-20f46cb4850e" containerName="nova-cell0-conductor-conductor" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766771 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766779 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-api" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766813 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1b0b239-3d7b-40c3-a599-f3f74d452813" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766821 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d1b0b239-3d7b-40c3-a599-f3f74d452813" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766832 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b12268ff-1299-46c4-8937-8c4dc00f7dc5" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766842 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b12268ff-1299-46c4-8937-8c4dc00f7dc5" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766852 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766863 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766879 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerName="placement-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766889 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerName="placement-api" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766906 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9dbc131-33f8-4df7-88ba-2d90e93a436c" containerName="kube-state-metrics" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766917 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9dbc131-33f8-4df7-88ba-2d90e93a436c" containerName="kube-state-metrics" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766930 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766938 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766952 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" containerName="keystone-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766960 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" containerName="keystone-api" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766973 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766981 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.766990 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-reaper" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.766997 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-reaper" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.767008 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767019 4812 
state_mem.go:107] "Deleted CPUSet assignment" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.767037 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767048 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.767068 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767077 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.767095 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerName="rabbitmq" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767106 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerName="rabbitmq" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.767119 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-expirer" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767128 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-expirer" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.767142 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="swift-recon-cron" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767151 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="swift-recon-cron" Nov 24 19:40:05 crc kubenswrapper[4812]: E1124 19:40:05.767166 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="sg-core" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767175 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="sg-core" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767435 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovsdb-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767457 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="cinder-scheduler" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767492 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="sg-core" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767499 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="proxy-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767505 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767514 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767530 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767539 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7063bb41-d0d1-4605-b265-1fb3adce77b5" containerName="probe" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767546 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767555 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="174dd2a1-292b-4e07-8ed8-48d4109e9f57" containerName="memcached" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767564 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1b0b239-3d7b-40c3-a599-f3f74d452813" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767574 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767580 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f68d8b-5bb9-4778-9416-1e9f82bba8f3" containerName="keystone-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767586 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767593 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8539e2-3b3b-488b-9111-d59aa7317490" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767602 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" containerName="placement-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767607 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767613 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d5ebd7-a16f-49bb-a80e-1138d5f197d3" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767620 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767626 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767634 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-auditor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767645 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767652 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8509a7-6885-4238-9da3-b214b5f8868e" 
containerName="placement-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767658 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fde2bf26-3775-4ffe-83d7-1bc11d36c1e3" containerName="barbican-keystone-listener-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767664 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-expirer" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767671 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="rsync" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767680 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-updater" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767689 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767699 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0c0304-db1e-4a1e-8b7e-5700c55fc9ed" containerName="cinder-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767706 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="36aff13b-5f97-40c0-8c88-64c93ce91bcb" containerName="nova-scheduler-scheduler" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767716 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-reaper" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767721 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767729 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="object-updater" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767739 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767746 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-server" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767755 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f4002e-a9bd-462d-b5ed-ce5ea166ec16" containerName="glance-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767762 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d373f4-f1f8-4c34-9c3e-047d0f67d6d9" containerName="nova-api-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767771 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2264eb6-f494-4800-832b-d1e1d02daf4e" containerName="galera" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767779 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-central-agent" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767790 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba9f8224-4f20-4c27-b242-3385791aed68" containerName="ovs-vswitchd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767797 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cf04fa0a-bd1d-442f-afff-05fe1ebdeafa" containerName="neutron-httpd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767803 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="openstack-network-exporter" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767813 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="swift-recon-cron" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767820 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c615e5-0d09-4775-b559-6883b6dc280b" containerName="ceilometer-notification-agent" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767827 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf01449d-a52d-488f-bd93-b5f84b57fb13" containerName="glance-log" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767834 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="account-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767841 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72df6d7-27f7-49e5-93d9-4069db72b602" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767850 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0648d4b9-0096-4092-b2b9-70e23f9c863c" containerName="ovn-northd" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767856 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1ab98f-9015-432e-90f2-4692dc37c99e" containerName="barbican-api" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767865 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9dbc131-33f8-4df7-88ba-2d90e93a436c" containerName="kube-state-metrics" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767873 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="db2cd03e-b999-48ea-b540-7fd35356ba8b" containerName="rabbitmq" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767879 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b12268ff-1299-46c4-8937-8c4dc00f7dc5" containerName="mariadb-account-delete" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767889 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b80396-f87d-435e-8478-0ecb34bccd94" containerName="nova-cell1-conductor-conductor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767897 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c127eda7-bbfe-4197-b5cf-f4f99824d0c8" containerName="container-replicator" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767905 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="93240875-2cab-44e0-b475-20f46cb4850e" containerName="nova-cell0-conductor-conductor" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767913 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce28537c-ff5d-4318-bc95-8a29da6aae53" containerName="nova-metadata-metadata" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.767922 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb8ede6-6163-4906-89f7-7fe6458edc36" containerName="rabbitmq" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.768858 4812 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.780939 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w42fz"] Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.949000 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-utilities\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.949180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgts\" (UniqueName: \"kubernetes.io/projected/ac401ff7-e4f1-40a9-9187-07f95bde8555-kube-api-access-dzgts\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:05 crc kubenswrapper[4812]: I1124 19:40:05.949266 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-catalog-content\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.050891 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-utilities\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.050997 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgts\" (UniqueName: \"kubernetes.io/projected/ac401ff7-e4f1-40a9-9187-07f95bde8555-kube-api-access-dzgts\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.051023 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-catalog-content\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.051619 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-catalog-content\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.051676 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-utilities\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.081249 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dzgts\" (UniqueName: \"kubernetes.io/projected/ac401ff7-e4f1-40a9-9187-07f95bde8555-kube-api-access-dzgts\") pod \"redhat-marketplace-w42fz\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.088452 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:06 crc kubenswrapper[4812]: I1124 19:40:06.572208 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w42fz"] Nov 24 19:40:07 crc kubenswrapper[4812]: I1124 19:40:07.329570 4812 generic.go:334] "Generic (PLEG): container finished" podID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerID="3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992" exitCode=0 Nov 24 19:40:07 crc kubenswrapper[4812]: I1124 19:40:07.329704 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w42fz" event={"ID":"ac401ff7-e4f1-40a9-9187-07f95bde8555","Type":"ContainerDied","Data":"3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992"} Nov 24 19:40:07 crc kubenswrapper[4812]: I1124 19:40:07.329762 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w42fz" event={"ID":"ac401ff7-e4f1-40a9-9187-07f95bde8555","Type":"ContainerStarted","Data":"079da5498e5270ca0cdc5b47a2aabf66a481ff218a1dedb99237d8995136f493"} Nov 24 19:40:07 crc kubenswrapper[4812]: I1124 19:40:07.334502 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 19:40:08 crc kubenswrapper[4812]: E1124 19:40:08.087076 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:40:08 crc kubenswrapper[4812]: E1124 19:40:08.087427 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts podName:2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9 nodeName:}" failed. No retries permitted until 2025-11-24 19:40:40.087410737 +0000 UTC m=+1433.876363098 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts") pod "novaapi5538-account-delete-t9sww" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9") : configmap "openstack-scripts" not found Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.074943 4812 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb72df6d7-27f7-49e5-93d9-4069db72b602"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb72df6d7-27f7-49e5-93d9-4069db72b602] : Timed out while waiting for systemd to remove kubepods-besteffort-podb72df6d7_27f7_49e5_93d9_4069db72b602.slice" Nov 24 19:40:09 crc kubenswrapper[4812]: E1124 19:40:09.075242 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb72df6d7-27f7-49e5-93d9-4069db72b602] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb72df6d7-27f7-49e5-93d9-4069db72b602] : Timed out while waiting for systemd to remove kubepods-besteffort-podb72df6d7_27f7_49e5_93d9_4069db72b602.slice" pod="openstack/neutron448d-account-delete-qskvr" podUID="b72df6d7-27f7-49e5-93d9-4069db72b602" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.245478 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.356404 4812 generic.go:334] "Generic (PLEG): container finished" podID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerID="77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01" exitCode=137 Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.356479 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi5538-account-delete-t9sww" event={"ID":"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9","Type":"ContainerDied","Data":"77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01"} Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.356507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi5538-account-delete-t9sww" event={"ID":"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9","Type":"ContainerDied","Data":"7ba603237c638b8b95b23a9796088c70a29ebfff6750c60097d85338ef551e9a"} Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.356509 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi5538-account-delete-t9sww" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.356539 4812 scope.go:117] "RemoveContainer" containerID="77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.360919 4812 generic.go:334] "Generic (PLEG): container finished" podID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerID="b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0" exitCode=0 Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.360980 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron448d-account-delete-qskvr" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.361078 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w42fz" event={"ID":"ac401ff7-e4f1-40a9-9187-07f95bde8555","Type":"ContainerDied","Data":"b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0"} Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.399656 4812 scope.go:117] "RemoveContainer" containerID="d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.410884 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts\") pod \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.411499 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4lg\" (UniqueName: \"kubernetes.io/projected/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-kube-api-access-lw4lg\") pod \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\" (UID: \"2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9\") " Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.412178 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.414246 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron448d-account-delete-qskvr"] Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.416804 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.416811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-kube-api-access-lw4lg" (OuterVolumeSpecName: "kube-api-access-lw4lg") pod "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" (UID: "2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9"). InnerVolumeSpecName "kube-api-access-lw4lg". 
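
Note on the teardown bookkeeping above: a volume leaves a deleted pod in three logged steps: "UnmountVolume started", "UnmountVolume.TearDown succeeded" (which names both the OuterVolumeSpecName from the pod spec and the plugin-level InnerVolumeSpecName), and finally "Volume detached ... DevicePath \"\"". Pairing the steps per volume is a quick way to spot a teardown that never completes; a small sketch, with regexes written as assumptions to tolerate the escaped quotes in this journal:

    import re
    from collections import defaultdict

    PATTERNS = {
        "started":  re.compile(r'UnmountVolume started for volume \\?"([^"\\]+)'),
        "torndown": re.compile(r'OuterVolumeSpecName: \\?"([^"\\]+)'),
        "detached": re.compile(r'Volume detached for volume \\?"([^"\\]+)'),
    }

    def teardown_progress(lines):
        """Map volume name -> set of teardown steps observed."""
        seen = defaultdict(set)
        for line in lines:
            for step, pat in PATTERNS.items():
                if m := pat.search(line):
                    seen[m.group(1)].add(step)
        # Healthy teardowns show all three steps; anything left at
        # "started" only deserves a closer look.
        return seen
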
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.419252 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron448d-account-delete-qskvr"] Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.430612 4812 scope.go:117] "RemoveContainer" containerID="77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01" Nov 24 19:40:09 crc kubenswrapper[4812]: E1124 19:40:09.430972 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01\": container with ID starting with 77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01 not found: ID does not exist" containerID="77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.431005 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01"} err="failed to get container status \"77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01\": rpc error: code = NotFound desc = could not find container \"77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01\": container with ID starting with 77dc3f6e74abb3b93503952b87f654fbfaef75da24f4b57f53b2e69b9f1e4f01 not found: ID does not exist" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.431024 4812 scope.go:117] "RemoveContainer" containerID="d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555" Nov 24 19:40:09 crc kubenswrapper[4812]: E1124 19:40:09.431446 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555\": container with ID starting with d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555 not found: ID does not exist" containerID="d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.431482 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555"} err="failed to get container status \"d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555\": rpc error: code = NotFound desc = could not find container \"d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555\": container with ID starting with d0bcc42b8192418057b3b366ea961bd0ce3afbaab1670843d8937a73bc38c555 not found: ID does not exist" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.518640 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4lg\" (UniqueName: \"kubernetes.io/projected/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9-kube-api-access-lw4lg\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.701505 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi5538-account-delete-t9sww"] Nov 24 19:40:09 crc kubenswrapper[4812]: I1124 19:40:09.711870 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi5538-account-delete-t9sww"] Nov 24 19:40:09 crc kubenswrapper[4812]: E1124 19:40:09.722654 4812 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 24 19:40:09 crc kubenswrapper[4812]: E1124 
19:40:09.722754 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts podName:c2d42cca-2cc7-4b6a-9f1d-215f522b4c82 nodeName:}" failed. No retries permitted until 2025-11-24 19:40:41.722726442 +0000 UTC m=+1435.511678843 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts") pod "glance7fd2-account-delete-vl62k" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82") : configmap "openstack-scripts" not found Nov 24 19:40:10 crc kubenswrapper[4812]: I1124 19:40:10.375036 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w42fz" event={"ID":"ac401ff7-e4f1-40a9-9187-07f95bde8555","Type":"ContainerStarted","Data":"add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be"} Nov 24 19:40:10 crc kubenswrapper[4812]: I1124 19:40:10.407714 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w42fz" podStartSLOduration=2.974341887 podStartE2EDuration="5.407686647s" podCreationTimestamp="2025-11-24 19:40:05 +0000 UTC" firstStartedPulling="2025-11-24 19:40:07.33403694 +0000 UTC m=+1401.122989351" lastFinishedPulling="2025-11-24 19:40:09.76738173 +0000 UTC m=+1403.556334111" observedRunningTime="2025-11-24 19:40:10.398719276 +0000 UTC m=+1404.187671727" watchObservedRunningTime="2025-11-24 19:40:10.407686647 +0000 UTC m=+1404.196639048" Nov 24 19:40:10 crc kubenswrapper[4812]: I1124 19:40:10.978601 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" path="/var/lib/kubelet/pods/2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9/volumes" Nov 24 19:40:10 crc kubenswrapper[4812]: I1124 19:40:10.979734 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72df6d7-27f7-49e5-93d9-4069db72b602" path="/var/lib/kubelet/pods/b72df6d7-27f7-49e5-93d9-4069db72b602/volumes" Nov 24 19:40:11 crc kubenswrapper[4812]: W1124 19:40:11.066210 4812 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-conmon-3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-conmon-3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992.scope: no such file or directory Nov 24 19:40:11 crc kubenswrapper[4812]: W1124 19:40:11.066322 4812 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992.scope: no such file or directory Nov 24 19:40:11 crc kubenswrapper[4812]: W1124 19:40:11.066408 4812 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-conmon-b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-conmon-b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0.scope: no such file or directory Nov 24 19:40:11 crc kubenswrapper[4812]: W1124 19:40:11.066446 4812 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac401ff7_e4f1_40a9_9187_07f95bde8555.slice/crio-b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0.scope: no such file or directory Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.385497 4812 generic.go:334] "Generic (PLEG): container finished" podID="c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" containerID="d5fae8072f9f408d92aa3d01e0eff2bf277ff66460adda263052312bf707a170" exitCode=137 Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.386592 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance7fd2-account-delete-vl62k" event={"ID":"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82","Type":"ContainerDied","Data":"d5fae8072f9f408d92aa3d01e0eff2bf277ff66460adda263052312bf707a170"} Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.386622 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance7fd2-account-delete-vl62k" event={"ID":"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82","Type":"ContainerDied","Data":"78c456259fa0894750b2097a40cb0c89a5e41533a162992101df1bd5e34c787e"} Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.386638 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c456259fa0894750b2097a40cb0c89a5e41533a162992101df1bd5e34c787e" Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.441618 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.451846 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp9sz\" (UniqueName: \"kubernetes.io/projected/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-kube-api-access-fp9sz\") pod \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.452014 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts\") pod \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\" (UID: \"c2d42cca-2cc7-4b6a-9f1d-215f522b4c82\") " Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.452811 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.460693 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-kube-api-access-fp9sz" (OuterVolumeSpecName: "kube-api-access-fp9sz") pod "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" (UID: "c2d42cca-2cc7-4b6a-9f1d-215f522b4c82"). InnerVolumeSpecName "kube-api-access-fp9sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.553665 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp9sz\" (UniqueName: \"kubernetes.io/projected/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-kube-api-access-fp9sz\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:11 crc kubenswrapper[4812]: I1124 19:40:11.553710 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:12 crc kubenswrapper[4812]: I1124 19:40:12.393759 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance7fd2-account-delete-vl62k" Nov 24 19:40:12 crc kubenswrapper[4812]: I1124 19:40:12.423069 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance7fd2-account-delete-vl62k"] Nov 24 19:40:12 crc kubenswrapper[4812]: I1124 19:40:12.427315 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance7fd2-account-delete-vl62k"] Nov 24 19:40:12 crc kubenswrapper[4812]: I1124 19:40:12.982170 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" path="/var/lib/kubelet/pods/c2d42cca-2cc7-4b6a-9f1d-215f522b4c82/volumes" Nov 24 19:40:16 crc kubenswrapper[4812]: I1124 19:40:16.088978 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:16 crc kubenswrapper[4812]: I1124 19:40:16.089454 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:16 crc kubenswrapper[4812]: I1124 19:40:16.165552 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:16 crc kubenswrapper[4812]: I1124 19:40:16.501539 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:16 crc kubenswrapper[4812]: I1124 19:40:16.571407 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w42fz"] Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.454160 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w42fz" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="registry-server" containerID="cri-o://add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be" gracePeriod=2 Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.828023 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xww6q"] Nov 24 19:40:18 crc kubenswrapper[4812]: E1124 19:40:18.828695 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" containerName="mariadb-account-delete" Nov 24 19:40:18 crc 
kubenswrapper[4812]: I1124 19:40:18.828747 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: E1124 19:40:18.828785 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.828795 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: E1124 19:40:18.828806 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.828814 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.828980 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.828996 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d42cca-2cc7-4b6a-9f1d-215f522b4c82" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.829007 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2383a5cc-08fb-4f42-ab22-eb1c9ac49ad9" containerName="mariadb-account-delete" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.830235 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.838602 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xww6q"] Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.952199 4812 util.go:48] "No ready sandbox for pod can be found. 
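
Note on the RemoveStaleState burst above: when a new pod (here openshift-marketplace/certified-operators-xww6q) is admitted, the kubelet's CPU and memory managers garbage-collect per-container accounting left behind by pods that no longer exist; despite the E (error) level on the cpu_manager lines, each is paired with a "Deleted CPUSet assignment" confirmation, so this reads as routine housekeeping rather than a fault. The surrounding SyncLoop and PLEG lines carry the full pod lifecycle, which can be reconstructed per pod; a rough sketch under the assumption of one journal record per line (the regexes are illustrative):

    import re
    from collections import defaultdict

    SYNC = re.compile(r'"SyncLoop (ADD|UPDATE|DELETE|REMOVE)" source="api" pods=\["([^"]+)"\]')
    PLEG = re.compile(r'event for pod" pod="([^"]+)" event=.*?"Type":"(Container\w+)"')

    def pod_timeline(lines):
        """Group SyncLoop and PLEG events per pod, in journal order."""
        timeline = defaultdict(list)
        for line in lines:
            ts = line[:15]  # records start with e.g. "Nov 24 19:40:18"
            if m := SYNC.search(line):
                timeline[m.group(2)].append((ts, m.group(1)))
            elif m := PLEG.search(line):
                timeline[m.group(1)].append((ts, m.group(2)))
        return timeline

    # Expected shape for a catalog pod: ADD, UPDATE, ContainerStarted,
    # ContainerDied (extract steps), ContainerStarted, DELETE, REMOVE.
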
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.970395 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-catalog-content\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.970484 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-utilities\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:18 crc kubenswrapper[4812]: I1124 19:40:18.970523 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbn7\" (UniqueName: \"kubernetes.io/projected/38f22ff6-2032-49c7-8056-3edf9b04e546-kube-api-access-dwbn7\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-catalog-content\") pod \"ac401ff7-e4f1-40a9-9187-07f95bde8555\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072115 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgts\" (UniqueName: \"kubernetes.io/projected/ac401ff7-e4f1-40a9-9187-07f95bde8555-kube-api-access-dzgts\") pod \"ac401ff7-e4f1-40a9-9187-07f95bde8555\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072193 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-utilities\") pod \"ac401ff7-e4f1-40a9-9187-07f95bde8555\" (UID: \"ac401ff7-e4f1-40a9-9187-07f95bde8555\") " Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072458 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-utilities\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072509 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbn7\" (UniqueName: \"kubernetes.io/projected/38f22ff6-2032-49c7-8056-3edf9b04e546-kube-api-access-dwbn7\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-catalog-content\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " 
pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072901 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-utilities\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.072948 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-catalog-content\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.073184 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-utilities" (OuterVolumeSpecName: "utilities") pod "ac401ff7-e4f1-40a9-9187-07f95bde8555" (UID: "ac401ff7-e4f1-40a9-9187-07f95bde8555"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.079227 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac401ff7-e4f1-40a9-9187-07f95bde8555-kube-api-access-dzgts" (OuterVolumeSpecName: "kube-api-access-dzgts") pod "ac401ff7-e4f1-40a9-9187-07f95bde8555" (UID: "ac401ff7-e4f1-40a9-9187-07f95bde8555"). InnerVolumeSpecName "kube-api-access-dzgts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.093777 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac401ff7-e4f1-40a9-9187-07f95bde8555" (UID: "ac401ff7-e4f1-40a9-9187-07f95bde8555"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.099728 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbn7\" (UniqueName: \"kubernetes.io/projected/38f22ff6-2032-49c7-8056-3edf9b04e546-kube-api-access-dwbn7\") pod \"certified-operators-xww6q\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.151389 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.173735 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.173766 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac401ff7-e4f1-40a9-9187-07f95bde8555-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.173779 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgts\" (UniqueName: \"kubernetes.io/projected/ac401ff7-e4f1-40a9-9187-07f95bde8555-kube-api-access-dzgts\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.404404 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xww6q"] Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.479159 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xww6q" event={"ID":"38f22ff6-2032-49c7-8056-3edf9b04e546","Type":"ContainerStarted","Data":"106db38a3e08b17306dd24fff5be5d01cb1b5e3bb5ed4264bcda50eaa98e0e2d"} Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.488188 4812 generic.go:334] "Generic (PLEG): container finished" podID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerID="add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be" exitCode=0 Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.488238 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w42fz" event={"ID":"ac401ff7-e4f1-40a9-9187-07f95bde8555","Type":"ContainerDied","Data":"add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be"} Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.488268 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w42fz" event={"ID":"ac401ff7-e4f1-40a9-9187-07f95bde8555","Type":"ContainerDied","Data":"079da5498e5270ca0cdc5b47a2aabf66a481ff218a1dedb99237d8995136f493"} Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.488289 4812 scope.go:117] "RemoveContainer" containerID="add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.488460 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w42fz" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.519312 4812 scope.go:117] "RemoveContainer" containerID="b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.525644 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w42fz"] Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.530865 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w42fz"] Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.541021 4812 scope.go:117] "RemoveContainer" containerID="3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.567620 4812 scope.go:117] "RemoveContainer" containerID="add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be" Nov 24 19:40:19 crc kubenswrapper[4812]: E1124 19:40:19.567921 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be\": container with ID starting with add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be not found: ID does not exist" containerID="add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.567948 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be"} err="failed to get container status \"add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be\": rpc error: code = NotFound desc = could not find container \"add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be\": container with ID starting with add3b8d02f754e00f111458fda2ca832ac62f9886909c8bdc63978e7252035be not found: ID does not exist" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.567968 4812 scope.go:117] "RemoveContainer" containerID="b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0" Nov 24 19:40:19 crc kubenswrapper[4812]: E1124 19:40:19.568178 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0\": container with ID starting with b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0 not found: ID does not exist" containerID="b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.568201 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0"} err="failed to get container status \"b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0\": rpc error: code = NotFound desc = could not find container \"b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0\": container with ID starting with b92b314b2409e98638b8bfcb38bda2c4b678173f51eaf7013c9d7d80369a18f0 not found: ID does not exist" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.568215 4812 scope.go:117] "RemoveContainer" containerID="3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992" Nov 24 19:40:19 crc kubenswrapper[4812]: E1124 19:40:19.568417 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992\": container with ID starting with 3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992 not found: ID does not exist" containerID="3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992" Nov 24 19:40:19 crc kubenswrapper[4812]: I1124 19:40:19.568435 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992"} err="failed to get container status \"3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992\": rpc error: code = NotFound desc = could not find container \"3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992\": container with ID starting with 3549113eebe37e9a4ff1f0b50d0e39836831d60b5a67284423e1cc24fb993992 not found: ID does not exist" Nov 24 19:40:20 crc kubenswrapper[4812]: I1124 19:40:20.499367 4812 generic.go:334] "Generic (PLEG): container finished" podID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerID="e535cc42b7c6ebe698df1907dc55355af5fa04b69afa2106a95e71e340098bea" exitCode=0 Nov 24 19:40:20 crc kubenswrapper[4812]: I1124 19:40:20.499520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xww6q" event={"ID":"38f22ff6-2032-49c7-8056-3edf9b04e546","Type":"ContainerDied","Data":"e535cc42b7c6ebe698df1907dc55355af5fa04b69afa2106a95e71e340098bea"} Nov 24 19:40:20 crc kubenswrapper[4812]: I1124 19:40:20.977446 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" path="/var/lib/kubelet/pods/ac401ff7-e4f1-40a9-9187-07f95bde8555/volumes" Nov 24 19:40:21 crc kubenswrapper[4812]: I1124 19:40:21.515964 4812 generic.go:334] "Generic (PLEG): container finished" podID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerID="11914f273c7502e466d7431338def0ac4f8da846a95fc154ddc4a17fbcbded54" exitCode=0 Nov 24 19:40:21 crc kubenswrapper[4812]: I1124 19:40:21.516042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xww6q" event={"ID":"38f22ff6-2032-49c7-8056-3edf9b04e546","Type":"ContainerDied","Data":"11914f273c7502e466d7431338def0ac4f8da846a95fc154ddc4a17fbcbded54"} Nov 24 19:40:22 crc kubenswrapper[4812]: I1124 19:40:22.552483 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xww6q" event={"ID":"38f22ff6-2032-49c7-8056-3edf9b04e546","Type":"ContainerStarted","Data":"ec53b602e05ef397035f047a2e93e719d4c512e27a6a31db5ecff430b2b6e60d"} Nov 24 19:40:22 crc kubenswrapper[4812]: I1124 19:40:22.578464 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xww6q" podStartSLOduration=3.164244471 podStartE2EDuration="4.578442238s" podCreationTimestamp="2025-11-24 19:40:18 +0000 UTC" firstStartedPulling="2025-11-24 19:40:20.504015766 +0000 UTC m=+1414.292968147" lastFinishedPulling="2025-11-24 19:40:21.918213503 +0000 UTC m=+1415.707165914" observedRunningTime="2025-11-24 19:40:22.577914973 +0000 UTC m=+1416.366867344" watchObservedRunningTime="2025-11-24 19:40:22.578442238 +0000 UTC m=+1416.367394619" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.037685 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8kmt6"] Nov 24 19:40:25 crc kubenswrapper[4812]: E1124 19:40:25.038474 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="registry-server" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.038496 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="registry-server" Nov 24 19:40:25 crc kubenswrapper[4812]: E1124 19:40:25.038521 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="extract-utilities" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.038533 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="extract-utilities" Nov 24 19:40:25 crc kubenswrapper[4812]: E1124 19:40:25.038553 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="extract-content" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.038568 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="extract-content" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.038832 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac401ff7-e4f1-40a9-9187-07f95bde8555" containerName="registry-server" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.040732 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.056441 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kmt6"] Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.165481 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-utilities\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.165551 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rx2w\" (UniqueName: \"kubernetes.io/projected/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-kube-api-access-4rx2w\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.165617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-catalog-content\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.266667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-utilities\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.266717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rx2w\" (UniqueName: 
\"kubernetes.io/projected/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-kube-api-access-4rx2w\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.266764 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-catalog-content\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.267184 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-utilities\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.267301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-catalog-content\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.284978 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rx2w\" (UniqueName: \"kubernetes.io/projected/01b2c16e-f16e-4b31-abf4-ba0a69e849d6-kube-api-access-4rx2w\") pod \"redhat-operators-8kmt6\" (UID: \"01b2c16e-f16e-4b31-abf4-ba0a69e849d6\") " pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.375134 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:25 crc kubenswrapper[4812]: I1124 19:40:25.601291 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kmt6"] Nov 24 19:40:25 crc kubenswrapper[4812]: W1124 19:40:25.609623 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b2c16e_f16e_4b31_abf4_ba0a69e849d6.slice/crio-a880a7841f0d223c6113831559d2d6c09c2c31b366caf338806324f0462e311c WatchSource:0}: Error finding container a880a7841f0d223c6113831559d2d6c09c2c31b366caf338806324f0462e311c: Status 404 returned error can't find the container with id a880a7841f0d223c6113831559d2d6c09c2c31b366caf338806324f0462e311c Nov 24 19:40:26 crc kubenswrapper[4812]: I1124 19:40:26.592812 4812 generic.go:334] "Generic (PLEG): container finished" podID="01b2c16e-f16e-4b31-abf4-ba0a69e849d6" containerID="6545ab27aee755ac13f4a54402f3280ac1aeed0d174c1db2c0083b5bf3bea630" exitCode=0 Nov 24 19:40:26 crc kubenswrapper[4812]: I1124 19:40:26.592896 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmt6" event={"ID":"01b2c16e-f16e-4b31-abf4-ba0a69e849d6","Type":"ContainerDied","Data":"6545ab27aee755ac13f4a54402f3280ac1aeed0d174c1db2c0083b5bf3bea630"} Nov 24 19:40:26 crc kubenswrapper[4812]: I1124 19:40:26.592953 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmt6" event={"ID":"01b2c16e-f16e-4b31-abf4-ba0a69e849d6","Type":"ContainerStarted","Data":"a880a7841f0d223c6113831559d2d6c09c2c31b366caf338806324f0462e311c"} Nov 24 19:40:29 crc kubenswrapper[4812]: I1124 19:40:29.152558 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:29 crc kubenswrapper[4812]: I1124 19:40:29.152952 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:29 crc kubenswrapper[4812]: I1124 19:40:29.227922 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:29 crc kubenswrapper[4812]: I1124 19:40:29.676168 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:30 crc kubenswrapper[4812]: I1124 19:40:30.207078 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xww6q"] Nov 24 19:40:31 crc kubenswrapper[4812]: I1124 19:40:31.643711 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xww6q" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="registry-server" containerID="cri-o://ec53b602e05ef397035f047a2e93e719d4c512e27a6a31db5ecff430b2b6e60d" gracePeriod=2 Nov 24 19:40:32 crc kubenswrapper[4812]: I1124 19:40:32.653853 4812 generic.go:334] "Generic (PLEG): container finished" podID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerID="ec53b602e05ef397035f047a2e93e719d4c512e27a6a31db5ecff430b2b6e60d" exitCode=0 Nov 24 19:40:32 crc kubenswrapper[4812]: I1124 19:40:32.653948 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xww6q" 
event={"ID":"38f22ff6-2032-49c7-8056-3edf9b04e546","Type":"ContainerDied","Data":"ec53b602e05ef397035f047a2e93e719d4c512e27a6a31db5ecff430b2b6e60d"} Nov 24 19:40:32 crc kubenswrapper[4812]: I1124 19:40:32.998952 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:40:32 crc kubenswrapper[4812]: I1124 19:40:32.999027 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.246089 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.302771 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-catalog-content\") pod \"38f22ff6-2032-49c7-8056-3edf9b04e546\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.302833 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-utilities\") pod \"38f22ff6-2032-49c7-8056-3edf9b04e546\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.303034 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbn7\" (UniqueName: \"kubernetes.io/projected/38f22ff6-2032-49c7-8056-3edf9b04e546-kube-api-access-dwbn7\") pod \"38f22ff6-2032-49c7-8056-3edf9b04e546\" (UID: \"38f22ff6-2032-49c7-8056-3edf9b04e546\") " Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.304734 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-utilities" (OuterVolumeSpecName: "utilities") pod "38f22ff6-2032-49c7-8056-3edf9b04e546" (UID: "38f22ff6-2032-49c7-8056-3edf9b04e546"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.309852 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f22ff6-2032-49c7-8056-3edf9b04e546-kube-api-access-dwbn7" (OuterVolumeSpecName: "kube-api-access-dwbn7") pod "38f22ff6-2032-49c7-8056-3edf9b04e546" (UID: "38f22ff6-2032-49c7-8056-3edf9b04e546"). InnerVolumeSpecName "kube-api-access-dwbn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.360899 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38f22ff6-2032-49c7-8056-3edf9b04e546" (UID: "38f22ff6-2032-49c7-8056-3edf9b04e546"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.406180 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbn7\" (UniqueName: \"kubernetes.io/projected/38f22ff6-2032-49c7-8056-3edf9b04e546-kube-api-access-dwbn7\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.406220 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.406229 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38f22ff6-2032-49c7-8056-3edf9b04e546-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.668601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmt6" event={"ID":"01b2c16e-f16e-4b31-abf4-ba0a69e849d6","Type":"ContainerStarted","Data":"745481e94155d0a5b50bbb0077bc48733f15ee69f5e53536599d08a1018ab681"} Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.671436 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xww6q" event={"ID":"38f22ff6-2032-49c7-8056-3edf9b04e546","Type":"ContainerDied","Data":"106db38a3e08b17306dd24fff5be5d01cb1b5e3bb5ed4264bcda50eaa98e0e2d"} Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.671478 4812 scope.go:117] "RemoveContainer" containerID="ec53b602e05ef397035f047a2e93e719d4c512e27a6a31db5ecff430b2b6e60d" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.671612 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xww6q" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.719438 4812 scope.go:117] "RemoveContainer" containerID="11914f273c7502e466d7431338def0ac4f8da846a95fc154ddc4a17fbcbded54" Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.727678 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xww6q"] Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.733570 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xww6q"] Nov 24 19:40:33 crc kubenswrapper[4812]: I1124 19:40:33.756226 4812 scope.go:117] "RemoveContainer" containerID="e535cc42b7c6ebe698df1907dc55355af5fa04b69afa2106a95e71e340098bea" Nov 24 19:40:34 crc kubenswrapper[4812]: I1124 19:40:34.687461 4812 generic.go:334] "Generic (PLEG): container finished" podID="01b2c16e-f16e-4b31-abf4-ba0a69e849d6" containerID="745481e94155d0a5b50bbb0077bc48733f15ee69f5e53536599d08a1018ab681" exitCode=0 Nov 24 19:40:34 crc kubenswrapper[4812]: I1124 19:40:34.687538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmt6" event={"ID":"01b2c16e-f16e-4b31-abf4-ba0a69e849d6","Type":"ContainerDied","Data":"745481e94155d0a5b50bbb0077bc48733f15ee69f5e53536599d08a1018ab681"} Nov 24 19:40:34 crc kubenswrapper[4812]: I1124 19:40:34.982319 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" path="/var/lib/kubelet/pods/38f22ff6-2032-49c7-8056-3edf9b04e546/volumes" Nov 24 19:40:35 crc kubenswrapper[4812]: I1124 19:40:35.703100 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8kmt6" event={"ID":"01b2c16e-f16e-4b31-abf4-ba0a69e849d6","Type":"ContainerStarted","Data":"97917fd2df1238b7177a29df3880647567e8d6814952eefbe4d88bbfb19d1a00"} Nov 24 19:40:35 crc kubenswrapper[4812]: I1124 19:40:35.725256 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8kmt6" podStartSLOduration=3.136298722 podStartE2EDuration="11.725234089s" podCreationTimestamp="2025-11-24 19:40:24 +0000 UTC" firstStartedPulling="2025-11-24 19:40:26.594729319 +0000 UTC m=+1420.383681690" lastFinishedPulling="2025-11-24 19:40:35.183664646 +0000 UTC m=+1428.972617057" observedRunningTime="2025-11-24 19:40:35.724700004 +0000 UTC m=+1429.513652405" watchObservedRunningTime="2025-11-24 19:40:35.725234089 +0000 UTC m=+1429.514186470" Nov 24 19:40:45 crc kubenswrapper[4812]: I1124 19:40:45.376280 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:45 crc kubenswrapper[4812]: I1124 19:40:45.376803 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:45 crc kubenswrapper[4812]: I1124 19:40:45.434209 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:45 crc kubenswrapper[4812]: I1124 19:40:45.860502 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8kmt6" Nov 24 19:40:45 crc kubenswrapper[4812]: I1124 19:40:45.978561 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8kmt6"] Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.033850 4812 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqfb9"] Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.034187 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xqfb9" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="registry-server" containerID="cri-o://79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c" gracePeriod=2 Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.508429 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.521025 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-catalog-content\") pod \"580a9469-6660-444c-ae5b-d5eb1de8554c\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.521089 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-utilities\") pod \"580a9469-6660-444c-ae5b-d5eb1de8554c\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.521142 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5l4l\" (UniqueName: \"kubernetes.io/projected/580a9469-6660-444c-ae5b-d5eb1de8554c-kube-api-access-z5l4l\") pod \"580a9469-6660-444c-ae5b-d5eb1de8554c\" (UID: \"580a9469-6660-444c-ae5b-d5eb1de8554c\") " Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.523706 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-utilities" (OuterVolumeSpecName: "utilities") pod "580a9469-6660-444c-ae5b-d5eb1de8554c" (UID: "580a9469-6660-444c-ae5b-d5eb1de8554c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.528921 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580a9469-6660-444c-ae5b-d5eb1de8554c-kube-api-access-z5l4l" (OuterVolumeSpecName: "kube-api-access-z5l4l") pod "580a9469-6660-444c-ae5b-d5eb1de8554c" (UID: "580a9469-6660-444c-ae5b-d5eb1de8554c"). InnerVolumeSpecName "kube-api-access-z5l4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.621864 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "580a9469-6660-444c-ae5b-d5eb1de8554c" (UID: "580a9469-6660-444c-ae5b-d5eb1de8554c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.622257 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.622293 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580a9469-6660-444c-ae5b-d5eb1de8554c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.622303 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5l4l\" (UniqueName: \"kubernetes.io/projected/580a9469-6660-444c-ae5b-d5eb1de8554c-kube-api-access-z5l4l\") on node \"crc\" DevicePath \"\"" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.813439 4812 generic.go:334] "Generic (PLEG): container finished" podID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerID="79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c" exitCode=0 Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.813501 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqfb9" event={"ID":"580a9469-6660-444c-ae5b-d5eb1de8554c","Type":"ContainerDied","Data":"79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c"} Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.813920 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xqfb9" event={"ID":"580a9469-6660-444c-ae5b-d5eb1de8554c","Type":"ContainerDied","Data":"67072fb41c821e24cc63b825126fcea4f144e3427dec2eae3882e3181f811df5"} Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.813956 4812 scope.go:117] "RemoveContainer" containerID="79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.813541 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xqfb9" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.854068 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xqfb9"] Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.855618 4812 scope.go:117] "RemoveContainer" containerID="2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.860113 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xqfb9"] Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.877039 4812 scope.go:117] "RemoveContainer" containerID="7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.900805 4812 scope.go:117] "RemoveContainer" containerID="79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c" Nov 24 19:40:46 crc kubenswrapper[4812]: E1124 19:40:46.901484 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c\": container with ID starting with 79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c not found: ID does not exist" containerID="79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.901558 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c"} err="failed to get container status \"79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c\": rpc error: code = NotFound desc = could not find container \"79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c\": container with ID starting with 79faaac93a722290577326645cad912dfbda941d95b22dc9659bb82d93df365c not found: ID does not exist" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.901593 4812 scope.go:117] "RemoveContainer" containerID="2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f" Nov 24 19:40:46 crc kubenswrapper[4812]: E1124 19:40:46.901908 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f\": container with ID starting with 2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f not found: ID does not exist" containerID="2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.901942 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f"} err="failed to get container status \"2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f\": rpc error: code = NotFound desc = could not find container \"2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f\": container with ID starting with 2a63396e133a6e9387c01e584952c194d0121a49df9bd6b1bac8476c88db8d5f not found: ID does not exist" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.901964 4812 scope.go:117] "RemoveContainer" containerID="7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170" Nov 24 19:40:46 crc kubenswrapper[4812]: E1124 19:40:46.902323 4812 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170\": container with ID starting with 7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170 not found: ID does not exist" containerID="7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.902460 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170"} err="failed to get container status \"7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170\": rpc error: code = NotFound desc = could not find container \"7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170\": container with ID starting with 7ec51cc2ec24d10524b321475684565c4a384e46f23525ab9ff80bb515821170 not found: ID does not exist" Nov 24 19:40:46 crc kubenswrapper[4812]: I1124 19:40:46.976990 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" path="/var/lib/kubelet/pods/580a9469-6660-444c-ae5b-d5eb1de8554c/volumes" Nov 24 19:41:02 crc kubenswrapper[4812]: I1124 19:41:02.998190 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:41:02 crc kubenswrapper[4812]: I1124 19:41:02.998870 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.977132 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n7lj9"] Nov 24 19:41:25 crc kubenswrapper[4812]: E1124 19:41:25.978085 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="extract-utilities" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.978106 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="extract-utilities" Nov 24 19:41:25 crc kubenswrapper[4812]: E1124 19:41:25.978121 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="extract-content" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.978136 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="extract-content" Nov 24 19:41:25 crc kubenswrapper[4812]: E1124 19:41:25.978158 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="extract-utilities" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.978170 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="extract-utilities" Nov 24 19:41:25 crc kubenswrapper[4812]: E1124 19:41:25.978191 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="registry-server" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 
19:41:25.978202 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="registry-server" Nov 24 19:41:25 crc kubenswrapper[4812]: E1124 19:41:25.978240 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="extract-content" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.978252 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="extract-content" Nov 24 19:41:25 crc kubenswrapper[4812]: E1124 19:41:25.978279 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="registry-server" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.978291 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="registry-server" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.978574 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="580a9469-6660-444c-ae5b-d5eb1de8554c" containerName="registry-server" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.978623 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f22ff6-2032-49c7-8056-3edf9b04e546" containerName="registry-server" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.980389 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:25 crc kubenswrapper[4812]: I1124 19:41:25.993953 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7lj9"] Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.024896 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-catalog-content\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.025092 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-utilities\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.025236 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7dcv\" (UniqueName: \"kubernetes.io/projected/1f0ab3b5-5959-4e4a-855b-904f8412e226-kube-api-access-d7dcv\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.126591 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7dcv\" (UniqueName: \"kubernetes.io/projected/1f0ab3b5-5959-4e4a-855b-904f8412e226-kube-api-access-d7dcv\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.126704 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-catalog-content\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.126743 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-utilities\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.127273 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-utilities\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.127473 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-catalog-content\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.155181 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7dcv\" (UniqueName: \"kubernetes.io/projected/1f0ab3b5-5959-4e4a-855b-904f8412e226-kube-api-access-d7dcv\") pod \"community-operators-n7lj9\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.316065 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:26 crc kubenswrapper[4812]: I1124 19:41:26.849080 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7lj9"] Nov 24 19:41:27 crc kubenswrapper[4812]: I1124 19:41:27.277925 4812 generic.go:334] "Generic (PLEG): container finished" podID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerID="76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565" exitCode=0 Nov 24 19:41:27 crc kubenswrapper[4812]: I1124 19:41:27.278042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7lj9" event={"ID":"1f0ab3b5-5959-4e4a-855b-904f8412e226","Type":"ContainerDied","Data":"76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565"} Nov 24 19:41:27 crc kubenswrapper[4812]: I1124 19:41:27.278236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7lj9" event={"ID":"1f0ab3b5-5959-4e4a-855b-904f8412e226","Type":"ContainerStarted","Data":"bd5c9c3f1f3c53717f95a9f1be85bb45948b3cac45bae1b222bae47c2c01b86f"} Nov 24 19:41:28 crc kubenswrapper[4812]: I1124 19:41:28.290636 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7lj9" event={"ID":"1f0ab3b5-5959-4e4a-855b-904f8412e226","Type":"ContainerStarted","Data":"3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310"} Nov 24 19:41:29 crc kubenswrapper[4812]: I1124 19:41:29.305443 4812 generic.go:334] "Generic (PLEG): container finished" podID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerID="3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310" exitCode=0 Nov 24 19:41:29 crc kubenswrapper[4812]: I1124 19:41:29.305587 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7lj9" event={"ID":"1f0ab3b5-5959-4e4a-855b-904f8412e226","Type":"ContainerDied","Data":"3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310"} Nov 24 19:41:30 crc kubenswrapper[4812]: I1124 19:41:30.322476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7lj9" event={"ID":"1f0ab3b5-5959-4e4a-855b-904f8412e226","Type":"ContainerStarted","Data":"f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b"} Nov 24 19:41:30 crc kubenswrapper[4812]: I1124 19:41:30.348149 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n7lj9" podStartSLOduration=2.893169696 podStartE2EDuration="5.348122503s" podCreationTimestamp="2025-11-24 19:41:25 +0000 UTC" firstStartedPulling="2025-11-24 19:41:27.28026958 +0000 UTC m=+1481.069221991" lastFinishedPulling="2025-11-24 19:41:29.735222417 +0000 UTC m=+1483.524174798" observedRunningTime="2025-11-24 19:41:30.344916215 +0000 UTC m=+1484.133868626" watchObservedRunningTime="2025-11-24 19:41:30.348122503 +0000 UTC m=+1484.137074914" Nov 24 19:41:32 crc kubenswrapper[4812]: I1124 19:41:32.998577 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:41:32 crc kubenswrapper[4812]: I1124 19:41:32.999295 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" 
podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:41:33 crc kubenswrapper[4812]: I1124 19:41:32.999430 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:41:33 crc kubenswrapper[4812]: I1124 19:41:33.000117 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0df34c9db895de19526e2349039e7168974e5c1bd66252317644188b384437d"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:41:33 crc kubenswrapper[4812]: I1124 19:41:33.000212 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://d0df34c9db895de19526e2349039e7168974e5c1bd66252317644188b384437d" gracePeriod=600 Nov 24 19:41:33 crc kubenswrapper[4812]: I1124 19:41:33.353495 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="d0df34c9db895de19526e2349039e7168974e5c1bd66252317644188b384437d" exitCode=0 Nov 24 19:41:33 crc kubenswrapper[4812]: I1124 19:41:33.353530 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"d0df34c9db895de19526e2349039e7168974e5c1bd66252317644188b384437d"} Nov 24 19:41:33 crc kubenswrapper[4812]: I1124 19:41:33.353595 4812 scope.go:117] "RemoveContainer" containerID="b36a42e19accf5660fd1a99293b3671794dc547e12b8dcbfeada632fbf2982a3" Nov 24 19:41:34 crc kubenswrapper[4812]: I1124 19:41:34.379381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691"} Nov 24 19:41:36 crc kubenswrapper[4812]: I1124 19:41:36.316558 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:36 crc kubenswrapper[4812]: I1124 19:41:36.316958 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:36 crc kubenswrapper[4812]: I1124 19:41:36.374213 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:36 crc kubenswrapper[4812]: I1124 19:41:36.468255 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:36 crc kubenswrapper[4812]: I1124 19:41:36.628405 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7lj9"] Nov 24 19:41:38 crc kubenswrapper[4812]: I1124 19:41:38.427578 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n7lj9" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="registry-server" 
containerID="cri-o://f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b" gracePeriod=2 Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.430557 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.435365 4812 generic.go:334] "Generic (PLEG): container finished" podID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerID="f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b" exitCode=0 Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.435401 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7lj9" event={"ID":"1f0ab3b5-5959-4e4a-855b-904f8412e226","Type":"ContainerDied","Data":"f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b"} Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.435408 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7lj9" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.435428 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7lj9" event={"ID":"1f0ab3b5-5959-4e4a-855b-904f8412e226","Type":"ContainerDied","Data":"bd5c9c3f1f3c53717f95a9f1be85bb45948b3cac45bae1b222bae47c2c01b86f"} Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.435464 4812 scope.go:117] "RemoveContainer" containerID="f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.462474 4812 scope.go:117] "RemoveContainer" containerID="3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.485049 4812 scope.go:117] "RemoveContainer" containerID="76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.507912 4812 scope.go:117] "RemoveContainer" containerID="f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b" Nov 24 19:41:39 crc kubenswrapper[4812]: E1124 19:41:39.508372 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b\": container with ID starting with f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b not found: ID does not exist" containerID="f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.508421 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b"} err="failed to get container status \"f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b\": rpc error: code = NotFound desc = could not find container \"f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b\": container with ID starting with f915645755075fb1005f95a8603e7ab7db60e63265d9e3bb4e9251ac8b2abf4b not found: ID does not exist" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.508453 4812 scope.go:117] "RemoveContainer" containerID="3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310" Nov 24 19:41:39 crc kubenswrapper[4812]: E1124 19:41:39.508776 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310\": container with ID starting with 3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310 not found: ID does not exist" containerID="3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.508809 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310"} err="failed to get container status \"3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310\": rpc error: code = NotFound desc = could not find container \"3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310\": container with ID starting with 3dbc4fdbb653ea2e769e259772ffaf462c2e6e0fdd663631633fb677f0d10310 not found: ID does not exist" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.508831 4812 scope.go:117] "RemoveContainer" containerID="76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565" Nov 24 19:41:39 crc kubenswrapper[4812]: E1124 19:41:39.509104 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565\": container with ID starting with 76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565 not found: ID does not exist" containerID="76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.509155 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565"} err="failed to get container status \"76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565\": rpc error: code = NotFound desc = could not find container \"76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565\": container with ID starting with 76e408ccd880d1f60ac2591911602514f1ffe24666afce92e09b5cb6a1cee565 not found: ID does not exist" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.542185 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7dcv\" (UniqueName: \"kubernetes.io/projected/1f0ab3b5-5959-4e4a-855b-904f8412e226-kube-api-access-d7dcv\") pod \"1f0ab3b5-5959-4e4a-855b-904f8412e226\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.542318 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-catalog-content\") pod \"1f0ab3b5-5959-4e4a-855b-904f8412e226\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.542431 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-utilities\") pod \"1f0ab3b5-5959-4e4a-855b-904f8412e226\" (UID: \"1f0ab3b5-5959-4e4a-855b-904f8412e226\") " Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.543248 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-utilities" (OuterVolumeSpecName: "utilities") pod "1f0ab3b5-5959-4e4a-855b-904f8412e226" (UID: "1f0ab3b5-5959-4e4a-855b-904f8412e226"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.550263 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0ab3b5-5959-4e4a-855b-904f8412e226-kube-api-access-d7dcv" (OuterVolumeSpecName: "kube-api-access-d7dcv") pod "1f0ab3b5-5959-4e4a-855b-904f8412e226" (UID: "1f0ab3b5-5959-4e4a-855b-904f8412e226"). InnerVolumeSpecName "kube-api-access-d7dcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.611716 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f0ab3b5-5959-4e4a-855b-904f8412e226" (UID: "1f0ab3b5-5959-4e4a-855b-904f8412e226"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.643731 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7dcv\" (UniqueName: \"kubernetes.io/projected/1f0ab3b5-5959-4e4a-855b-904f8412e226-kube-api-access-d7dcv\") on node \"crc\" DevicePath \"\"" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.643770 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.643782 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f0ab3b5-5959-4e4a-855b-904f8412e226-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.776257 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7lj9"] Nov 24 19:41:39 crc kubenswrapper[4812]: I1124 19:41:39.785486 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n7lj9"] Nov 24 19:41:40 crc kubenswrapper[4812]: I1124 19:41:40.983879 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" path="/var/lib/kubelet/pods/1f0ab3b5-5959-4e4a-855b-904f8412e226/volumes" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.008357 4812 scope.go:117] "RemoveContainer" containerID="5b24363a93467b1f7aa9e690a4e55479fe80f7b68b45f314d458b78b448a1226" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.042302 4812 scope.go:117] "RemoveContainer" containerID="3f6b939af1e2c730ee7d67e6cd8a16b9550676b93f648f93b4359aa7139b962a" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.089614 4812 scope.go:117] "RemoveContainer" containerID="3f636d722fa34d8944e84798a852e165a8362197815ef3149415654b567bad59" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.116048 4812 scope.go:117] "RemoveContainer" containerID="27dd92bf0fa31dae5743c0d091290f515f61b6aec4150df2a4d104523c02a158" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.154932 4812 scope.go:117] "RemoveContainer" containerID="26e7b8154f6ad6d1baf3c0c70a991b1f542ee6aff6d15fb541f84daeb638b80a" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.178742 4812 scope.go:117] "RemoveContainer" containerID="6784b207fae1272e4e64281366725f6ba901ff3f865eac1a5c7530617134de08" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.240047 4812 scope.go:117] "RemoveContainer" 
containerID="fb57fda78afe686a7cc9cc2b74189b0701e0efae994b7b22fa603354146ddb49" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.275799 4812 scope.go:117] "RemoveContainer" containerID="b703ec77a0ae9156427e8cc74e3ce3d598c22e004a5eb55415fde3f16621e23f" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.306531 4812 scope.go:117] "RemoveContainer" containerID="c8fca5425aa6a7ad2399b179910d88ae1434ba33809ef43ae78dfaf618f07228" Nov 24 19:41:49 crc kubenswrapper[4812]: I1124 19:41:49.327752 4812 scope.go:117] "RemoveContainer" containerID="2f8101fdc30ddfa6c4c114cd05ac79aa0d1d7716e24bd8673c1568ea942f93c1" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.618774 4812 scope.go:117] "RemoveContainer" containerID="44d0bec5b4e03ee1c205aff04d3d557f0b6f05d3d6adee7a3fd558ceba3cbd6b" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.654128 4812 scope.go:117] "RemoveContainer" containerID="759efb3419052af308a0f1cfd3e42dcd638d6d2359dad4e5663e9916b25ea007" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.705560 4812 scope.go:117] "RemoveContainer" containerID="a0e49ffe5998442991d74da802e4da1613968b0b01164fd062d312a1654b88c3" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.736899 4812 scope.go:117] "RemoveContainer" containerID="a8144b0e721573926c3a34a9404c880cf7fb2982f4529eb1e7122567c155687e" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.777099 4812 scope.go:117] "RemoveContainer" containerID="35c1eabbdd36c309a8caf2aee002a0c9f518678113009e4ab89110691a9d1f18" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.809213 4812 scope.go:117] "RemoveContainer" containerID="46b5e8995de160649c855995e4c4380c56e1024966f5c97fad37845a75a8e823" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.837792 4812 scope.go:117] "RemoveContainer" containerID="c8722eb6dbfb9224875a2ab675d16b71bae4375b94338205f71c4fe08e703374" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.863454 4812 scope.go:117] "RemoveContainer" containerID="f5c4371e8bb52bf127f9a713ce008c36c8f59703bdb0516d218edf25f0da6d3c" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.899398 4812 scope.go:117] "RemoveContainer" containerID="6ecefa45e298955bae6aa3a3705807ec3dc02991ab6d484fb90e5a7431b808c5" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.939560 4812 scope.go:117] "RemoveContainer" containerID="c13e6786a49a879ebdafe7e2706bbeadf9a5620f8ec5d41a07a487a04ba9d615" Nov 24 19:42:49 crc kubenswrapper[4812]: I1124 19:42:49.962020 4812 scope.go:117] "RemoveContainer" containerID="bff8b6d9f9e59a43bca8cd6f4630a2ffe5d31fd7e1ec2ffc6c7cc3e9a1a6f6ac" Nov 24 19:42:50 crc kubenswrapper[4812]: I1124 19:42:50.009325 4812 scope.go:117] "RemoveContainer" containerID="e021b7fe56e0097d5aa8fd3433742c3816130416aba858d21e90d0ed7ba6119b" Nov 24 19:42:50 crc kubenswrapper[4812]: I1124 19:42:50.050955 4812 scope.go:117] "RemoveContainer" containerID="3013595fb4bd60e42e57392ce0200dcf65e129516355580bf500376553432846" Nov 24 19:42:50 crc kubenswrapper[4812]: I1124 19:42:50.072612 4812 scope.go:117] "RemoveContainer" containerID="4ae81d337940329ca754c7061c6455b5bfd2988945b51436bb58b67fc8c4cf02" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.314356 4812 scope.go:117] "RemoveContainer" containerID="a1d0c0172507606f6c4fa928a3e9234afb2ec885eb8b341a5de04336c4f109af" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.354315 4812 scope.go:117] "RemoveContainer" containerID="bae6552008f168d97c89dd33e9696bdc1c9a7a3e98584a611c0be209602e0b54" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.386498 4812 
scope.go:117] "RemoveContainer" containerID="6e758e24b2c99d6af7b60e8063d9a1fc7ae01657afe6750b9366e0330b759807" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.436039 4812 scope.go:117] "RemoveContainer" containerID="56e92d62401e0fb5fecfcb29bd586b700c363e86804cbadf37038c174b1e81f1" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.492220 4812 scope.go:117] "RemoveContainer" containerID="1bf723392e2593de42b16d0252b5913f24c0fd77eff44f99268cc159a68845e0" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.517865 4812 scope.go:117] "RemoveContainer" containerID="529e2e8483b00d8d42ea5291410b0b7c3d3fa0e80bd3e854a931f9e02123e436" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.544145 4812 scope.go:117] "RemoveContainer" containerID="0b125b25f534109552515427bd331b685d82ec3aed6a70917f3be56f55bbd100" Nov 24 19:43:50 crc kubenswrapper[4812]: I1124 19:43:50.574202 4812 scope.go:117] "RemoveContainer" containerID="f1a398de16ac12139cac0d17939f95f60164b26c6180c9b237fc5e13c46dac37" Nov 24 19:44:02 crc kubenswrapper[4812]: I1124 19:44:02.998897 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:44:03 crc kubenswrapper[4812]: I1124 19:44:02.999631 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:44:32 crc kubenswrapper[4812]: I1124 19:44:32.998199 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:44:33 crc kubenswrapper[4812]: I1124 19:44:32.998808 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:44:50 crc kubenswrapper[4812]: I1124 19:44:50.724709 4812 scope.go:117] "RemoveContainer" containerID="7722d3ce2d1af69114e8e9df886108d0349e108a42086aac479457603d1c072a" Nov 24 19:44:50 crc kubenswrapper[4812]: I1124 19:44:50.766241 4812 scope.go:117] "RemoveContainer" containerID="4e7a4b06f4ec1fd3d802dead9d218ddf2050b690808c104ce9eff41582fadc93" Nov 24 19:44:50 crc kubenswrapper[4812]: I1124 19:44:50.798797 4812 scope.go:117] "RemoveContainer" containerID="705fac9235fc051bf3375b13e8f6298b3a7ac75c7c6fb5d5d491e30790d9dd51" Nov 24 19:44:50 crc kubenswrapper[4812]: I1124 19:44:50.868766 4812 scope.go:117] "RemoveContainer" containerID="22145cb6161b542de832198498201f037566ff179f93263c471dd93f46869d47" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.166796 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82"] Nov 24 19:45:00 crc kubenswrapper[4812]: E1124 19:45:00.167768 4812 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="registry-server" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.167789 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="registry-server" Nov 24 19:45:00 crc kubenswrapper[4812]: E1124 19:45:00.167833 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="extract-utilities" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.167846 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="extract-utilities" Nov 24 19:45:00 crc kubenswrapper[4812]: E1124 19:45:00.167866 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="extract-content" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.167879 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="extract-content" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.168132 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0ab3b5-5959-4e4a-855b-904f8412e226" containerName="registry-server" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.168901 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.172917 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.172921 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.182500 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82"] Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.346079 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00398ad-d448-40a8-b21c-886e79d6bc1e-config-volume\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.346132 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgqw\" (UniqueName: \"kubernetes.io/projected/e00398ad-d448-40a8-b21c-886e79d6bc1e-kube-api-access-kmgqw\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.346170 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00398ad-d448-40a8-b21c-886e79d6bc1e-secret-volume\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.448554 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00398ad-d448-40a8-b21c-886e79d6bc1e-config-volume\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.449034 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgqw\" (UniqueName: \"kubernetes.io/projected/e00398ad-d448-40a8-b21c-886e79d6bc1e-kube-api-access-kmgqw\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.450400 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00398ad-d448-40a8-b21c-886e79d6bc1e-config-volume\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.452133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00398ad-d448-40a8-b21c-886e79d6bc1e-secret-volume\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.468919 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00398ad-d448-40a8-b21c-886e79d6bc1e-secret-volume\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.484210 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgqw\" (UniqueName: \"kubernetes.io/projected/e00398ad-d448-40a8-b21c-886e79d6bc1e-kube-api-access-kmgqw\") pod \"collect-profiles-29400225-klf82\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.507502 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:00 crc kubenswrapper[4812]: I1124 19:45:00.959349 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82"] Nov 24 19:45:01 crc kubenswrapper[4812]: I1124 19:45:01.414364 4812 generic.go:334] "Generic (PLEG): container finished" podID="e00398ad-d448-40a8-b21c-886e79d6bc1e" containerID="86792e34ba6bbffa1140ac701b5536e2abc532813c9d47b332511cc4d5708217" exitCode=0 Nov 24 19:45:01 crc kubenswrapper[4812]: I1124 19:45:01.414406 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" event={"ID":"e00398ad-d448-40a8-b21c-886e79d6bc1e","Type":"ContainerDied","Data":"86792e34ba6bbffa1140ac701b5536e2abc532813c9d47b332511cc4d5708217"} Nov 24 19:45:01 crc kubenswrapper[4812]: I1124 19:45:01.414432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" event={"ID":"e00398ad-d448-40a8-b21c-886e79d6bc1e","Type":"ContainerStarted","Data":"6285474d676f56bef5398492ebc5cbf403a5154b85625593acaef4c1b3ef4194"} Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.775927 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.890241 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00398ad-d448-40a8-b21c-886e79d6bc1e-config-volume\") pod \"e00398ad-d448-40a8-b21c-886e79d6bc1e\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.890292 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgqw\" (UniqueName: \"kubernetes.io/projected/e00398ad-d448-40a8-b21c-886e79d6bc1e-kube-api-access-kmgqw\") pod \"e00398ad-d448-40a8-b21c-886e79d6bc1e\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.890402 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00398ad-d448-40a8-b21c-886e79d6bc1e-secret-volume\") pod \"e00398ad-d448-40a8-b21c-886e79d6bc1e\" (UID: \"e00398ad-d448-40a8-b21c-886e79d6bc1e\") " Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.890879 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00398ad-d448-40a8-b21c-886e79d6bc1e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e00398ad-d448-40a8-b21c-886e79d6bc1e" (UID: "e00398ad-d448-40a8-b21c-886e79d6bc1e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.895466 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e00398ad-d448-40a8-b21c-886e79d6bc1e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e00398ad-d448-40a8-b21c-886e79d6bc1e" (UID: "e00398ad-d448-40a8-b21c-886e79d6bc1e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.895701 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00398ad-d448-40a8-b21c-886e79d6bc1e-kube-api-access-kmgqw" (OuterVolumeSpecName: "kube-api-access-kmgqw") pod "e00398ad-d448-40a8-b21c-886e79d6bc1e" (UID: "e00398ad-d448-40a8-b21c-886e79d6bc1e"). InnerVolumeSpecName "kube-api-access-kmgqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.992161 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e00398ad-d448-40a8-b21c-886e79d6bc1e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.992187 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e00398ad-d448-40a8-b21c-886e79d6bc1e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.992196 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgqw\" (UniqueName: \"kubernetes.io/projected/e00398ad-d448-40a8-b21c-886e79d6bc1e-kube-api-access-kmgqw\") on node \"crc\" DevicePath \"\"" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.998417 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.998468 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.998505 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.999033 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:45:02 crc kubenswrapper[4812]: I1124 19:45:02.999093 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" gracePeriod=600 Nov 24 19:45:03 crc kubenswrapper[4812]: E1124 19:45:03.120949 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" 
podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:45:03 crc kubenswrapper[4812]: I1124 19:45:03.435937 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" exitCode=0 Nov 24 19:45:03 crc kubenswrapper[4812]: I1124 19:45:03.436007 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691"} Nov 24 19:45:03 crc kubenswrapper[4812]: I1124 19:45:03.436065 4812 scope.go:117] "RemoveContainer" containerID="d0df34c9db895de19526e2349039e7168974e5c1bd66252317644188b384437d" Nov 24 19:45:03 crc kubenswrapper[4812]: I1124 19:45:03.436907 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:45:03 crc kubenswrapper[4812]: E1124 19:45:03.437530 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:45:03 crc kubenswrapper[4812]: I1124 19:45:03.437996 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" event={"ID":"e00398ad-d448-40a8-b21c-886e79d6bc1e","Type":"ContainerDied","Data":"6285474d676f56bef5398492ebc5cbf403a5154b85625593acaef4c1b3ef4194"} Nov 24 19:45:03 crc kubenswrapper[4812]: I1124 19:45:03.438028 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6285474d676f56bef5398492ebc5cbf403a5154b85625593acaef4c1b3ef4194" Nov 24 19:45:03 crc kubenswrapper[4812]: I1124 19:45:03.438079 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82" Nov 24 19:45:16 crc kubenswrapper[4812]: I1124 19:45:16.970503 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:45:16 crc kubenswrapper[4812]: E1124 19:45:16.971851 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:45:29 crc kubenswrapper[4812]: I1124 19:45:29.965854 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:45:29 crc kubenswrapper[4812]: E1124 19:45:29.966813 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:45:41 crc kubenswrapper[4812]: I1124 19:45:41.966085 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:45:41 crc kubenswrapper[4812]: E1124 19:45:41.967002 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:45:50 crc kubenswrapper[4812]: I1124 19:45:50.977776 4812 scope.go:117] "RemoveContainer" containerID="0809a53c850e8919822e704dda8799d073f705e0a2901f82f0b34719ebf12895" Nov 24 19:45:51 crc kubenswrapper[4812]: I1124 19:45:51.020644 4812 scope.go:117] "RemoveContainer" containerID="160b25a1390b6fcfeffce5f61c64ce498bcf9f9bb67ab059982c912baa5f9678" Nov 24 19:45:51 crc kubenswrapper[4812]: I1124 19:45:51.064956 4812 scope.go:117] "RemoveContainer" containerID="d5fae8072f9f408d92aa3d01e0eff2bf277ff66460adda263052312bf707a170" Nov 24 19:45:51 crc kubenswrapper[4812]: I1124 19:45:51.093286 4812 scope.go:117] "RemoveContainer" containerID="b424d3652698c522c799ba60499e354d09adfff1a34968540b33cb676b432c7e" Nov 24 19:45:55 crc kubenswrapper[4812]: I1124 19:45:55.966265 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:45:55 crc kubenswrapper[4812]: E1124 19:45:55.966822 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:46:10 crc kubenswrapper[4812]: I1124 
19:46:10.966823 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:46:10 crc kubenswrapper[4812]: E1124 19:46:10.967822 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:46:21 crc kubenswrapper[4812]: I1124 19:46:21.973181 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:46:21 crc kubenswrapper[4812]: E1124 19:46:21.974094 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:46:34 crc kubenswrapper[4812]: I1124 19:46:34.966046 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:46:34 crc kubenswrapper[4812]: E1124 19:46:34.967129 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:46:49 crc kubenswrapper[4812]: I1124 19:46:49.965756 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:46:49 crc kubenswrapper[4812]: E1124 19:46:49.966966 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:47:01 crc kubenswrapper[4812]: I1124 19:47:01.966819 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:47:01 crc kubenswrapper[4812]: E1124 19:47:01.968110 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:47:15 crc kubenswrapper[4812]: I1124 19:47:15.965722 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:47:15 crc kubenswrapper[4812]: E1124 19:47:15.966313 
4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:47:27 crc kubenswrapper[4812]: I1124 19:47:27.966225 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:47:27 crc kubenswrapper[4812]: E1124 19:47:27.967164 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:47:40 crc kubenswrapper[4812]: I1124 19:47:40.966476 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:47:40 crc kubenswrapper[4812]: E1124 19:47:40.967990 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:47:52 crc kubenswrapper[4812]: I1124 19:47:52.966156 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:47:52 crc kubenswrapper[4812]: E1124 19:47:52.967523 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:48:03 crc kubenswrapper[4812]: I1124 19:48:03.966150 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:48:03 crc kubenswrapper[4812]: E1124 19:48:03.966968 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:48:14 crc kubenswrapper[4812]: I1124 19:48:14.966303 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:48:14 crc kubenswrapper[4812]: E1124 19:48:14.967381 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:48:26 crc kubenswrapper[4812]: I1124 19:48:26.971311 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:48:26 crc kubenswrapper[4812]: E1124 19:48:26.972296 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:48:39 crc kubenswrapper[4812]: I1124 19:48:39.966310 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:48:39 crc kubenswrapper[4812]: E1124 19:48:39.967545 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:48:50 crc kubenswrapper[4812]: I1124 19:48:50.966155 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:48:50 crc kubenswrapper[4812]: E1124 19:48:50.967306 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:49:04 crc kubenswrapper[4812]: I1124 19:49:04.969002 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:49:04 crc kubenswrapper[4812]: E1124 19:49:04.970384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:49:16 crc kubenswrapper[4812]: I1124 19:49:16.973751 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:49:16 crc kubenswrapper[4812]: E1124 19:49:16.974707 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:49:28 crc kubenswrapper[4812]: I1124 19:49:28.096206 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:49:28 crc kubenswrapper[4812]: E1124 19:49:28.097649 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:49:39 crc kubenswrapper[4812]: I1124 19:49:39.965612 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:49:39 crc kubenswrapper[4812]: E1124 19:49:39.966607 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:49:52 crc kubenswrapper[4812]: I1124 19:49:52.969831 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:49:52 crc kubenswrapper[4812]: E1124 19:49:52.971038 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:50:06 crc kubenswrapper[4812]: I1124 19:50:06.973646 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:50:07 crc kubenswrapper[4812]: I1124 19:50:07.441022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"89e31a7e027c311a0857050572d8d432d6fd8fe47ff5c2b985ae83aa6248a143"} Nov 24 19:50:28 crc kubenswrapper[4812]: I1124 19:50:28.856221 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-68nf9" podUID="2ec7107c-8ae0-4a00-901e-e70ac99520e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 19:52:33 crc kubenswrapper[4812]: I1124 19:52:32.999325 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:52:33 crc kubenswrapper[4812]: I1124 19:52:33.000194 4812 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:53:02 crc kubenswrapper[4812]: I1124 19:53:02.998742 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:53:03 crc kubenswrapper[4812]: I1124 19:53:03.000484 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:53:32 crc kubenswrapper[4812]: I1124 19:53:32.999088 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:32.999828 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:32.999895 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:33.000863 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89e31a7e027c311a0857050572d8d432d6fd8fe47ff5c2b985ae83aa6248a143"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:33.001005 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://89e31a7e027c311a0857050572d8d432d6fd8fe47ff5c2b985ae83aa6248a143" gracePeriod=600 Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:33.998561 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="89e31a7e027c311a0857050572d8d432d6fd8fe47ff5c2b985ae83aa6248a143" exitCode=0 Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:33.998677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"89e31a7e027c311a0857050572d8d432d6fd8fe47ff5c2b985ae83aa6248a143"} Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:33.998936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0"} Nov 24 19:53:33 crc kubenswrapper[4812]: I1124 19:53:33.998989 4812 scope.go:117] "RemoveContainer" containerID="31badff0b9ddfd634f8c6840eb16549ce99c1b05e065e09d3b20581145996691" Nov 24 19:56:02 crc kubenswrapper[4812]: I1124 19:56:02.998837 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:56:02 crc kubenswrapper[4812]: I1124 19:56:02.999321 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:56:32 crc kubenswrapper[4812]: I1124 19:56:32.998221 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:56:32 crc kubenswrapper[4812]: I1124 19:56:32.998902 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:57:02 crc kubenswrapper[4812]: I1124 19:57:02.998995 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 19:57:03 crc kubenswrapper[4812]: I1124 19:57:02.999760 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 19:57:03 crc kubenswrapper[4812]: I1124 19:57:02.999820 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 19:57:03 crc kubenswrapper[4812]: I1124 19:57:03.000715 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 19:57:03 crc kubenswrapper[4812]: I1124 19:57:03.000819 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" 
containerName="machine-config-daemon" containerID="cri-o://f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" gracePeriod=600 Nov 24 19:57:03 crc kubenswrapper[4812]: E1124 19:57:03.135785 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:57:04 crc kubenswrapper[4812]: I1124 19:57:04.128546 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" exitCode=0 Nov 24 19:57:04 crc kubenswrapper[4812]: I1124 19:57:04.128645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0"} Nov 24 19:57:04 crc kubenswrapper[4812]: I1124 19:57:04.128931 4812 scope.go:117] "RemoveContainer" containerID="89e31a7e027c311a0857050572d8d432d6fd8fe47ff5c2b985ae83aa6248a143" Nov 24 19:57:04 crc kubenswrapper[4812]: I1124 19:57:04.129666 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:57:04 crc kubenswrapper[4812]: E1124 19:57:04.129993 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:57:16 crc kubenswrapper[4812]: I1124 19:57:16.973875 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:57:16 crc kubenswrapper[4812]: E1124 19:57:16.974835 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:57:31 crc kubenswrapper[4812]: I1124 19:57:31.967559 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:57:31 crc kubenswrapper[4812]: E1124 19:57:31.968589 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:57:42 crc kubenswrapper[4812]: I1124 19:57:42.965868 4812 scope.go:117] "RemoveContainer" 
containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:57:42 crc kubenswrapper[4812]: E1124 19:57:42.966611 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:57:55 crc kubenswrapper[4812]: I1124 19:57:55.966311 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:57:55 crc kubenswrapper[4812]: E1124 19:57:55.967618 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:58:09 crc kubenswrapper[4812]: I1124 19:58:09.966208 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:58:09 crc kubenswrapper[4812]: E1124 19:58:09.967179 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:58:24 crc kubenswrapper[4812]: I1124 19:58:24.965728 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:58:24 crc kubenswrapper[4812]: E1124 19:58:24.966624 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:58:38 crc kubenswrapper[4812]: I1124 19:58:38.966767 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:58:38 crc kubenswrapper[4812]: E1124 19:58:38.967920 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.295410 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k42rc"] Nov 24 19:58:48 crc kubenswrapper[4812]: E1124 19:58:48.296535 4812 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e00398ad-d448-40a8-b21c-886e79d6bc1e" containerName="collect-profiles" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.296559 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00398ad-d448-40a8-b21c-886e79d6bc1e" containerName="collect-profiles" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.296837 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e00398ad-d448-40a8-b21c-886e79d6bc1e" containerName="collect-profiles" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.298807 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.331778 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k42rc"] Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.397843 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-catalog-content\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.397957 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-utilities\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.398173 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6r5\" (UniqueName: \"kubernetes.io/projected/20887a94-307b-45ae-93a2-34f12ccc335e-kube-api-access-5t6r5\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.478176 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nhwff"] Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.480409 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.498647 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhwff"] Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.501865 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-utilities\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.501960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t6r5\" (UniqueName: \"kubernetes.io/projected/20887a94-307b-45ae-93a2-34f12ccc335e-kube-api-access-5t6r5\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.502133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-catalog-content\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.503154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-utilities\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.506153 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-catalog-content\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.527911 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t6r5\" (UniqueName: \"kubernetes.io/projected/20887a94-307b-45ae-93a2-34f12ccc335e-kube-api-access-5t6r5\") pod \"redhat-marketplace-k42rc\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.603936 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-utilities\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.604055 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-catalog-content\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.604116 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfsst\" (UniqueName: \"kubernetes.io/projected/7577bfbb-5db5-40ba-af0b-6eeca708a7da-kube-api-access-kfsst\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.620825 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.706035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfsst\" (UniqueName: \"kubernetes.io/projected/7577bfbb-5db5-40ba-af0b-6eeca708a7da-kube-api-access-kfsst\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.706118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-utilities\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.706186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-catalog-content\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.706687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-utilities\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.706803 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-catalog-content\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.725523 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfsst\" (UniqueName: \"kubernetes.io/projected/7577bfbb-5db5-40ba-af0b-6eeca708a7da-kube-api-access-kfsst\") pod \"community-operators-nhwff\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.816648 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:58:48 crc kubenswrapper[4812]: I1124 19:58:48.889033 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k42rc"] Nov 24 19:58:48 crc kubenswrapper[4812]: W1124 19:58:48.934279 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20887a94_307b_45ae_93a2_34f12ccc335e.slice/crio-c0f064f3374376125f54e05777f5ca5e098b244e6760b2394c75390f6f5de0f7 WatchSource:0}: Error finding container c0f064f3374376125f54e05777f5ca5e098b244e6760b2394c75390f6f5de0f7: Status 404 returned error can't find the container with id c0f064f3374376125f54e05777f5ca5e098b244e6760b2394c75390f6f5de0f7 Nov 24 19:58:49 crc kubenswrapper[4812]: I1124 19:58:49.109952 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nhwff"] Nov 24 19:58:49 crc kubenswrapper[4812]: I1124 19:58:49.237737 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwff" event={"ID":"7577bfbb-5db5-40ba-af0b-6eeca708a7da","Type":"ContainerStarted","Data":"cbf60dfec1db4a08287584a40acf195151d27c8030f645f041f3713b9ee3cdc0"} Nov 24 19:58:49 crc kubenswrapper[4812]: I1124 19:58:49.239161 4812 generic.go:334] "Generic (PLEG): container finished" podID="20887a94-307b-45ae-93a2-34f12ccc335e" containerID="c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf" exitCode=0 Nov 24 19:58:49 crc kubenswrapper[4812]: I1124 19:58:49.239205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k42rc" event={"ID":"20887a94-307b-45ae-93a2-34f12ccc335e","Type":"ContainerDied","Data":"c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf"} Nov 24 19:58:49 crc kubenswrapper[4812]: I1124 19:58:49.239230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k42rc" event={"ID":"20887a94-307b-45ae-93a2-34f12ccc335e","Type":"ContainerStarted","Data":"c0f064f3374376125f54e05777f5ca5e098b244e6760b2394c75390f6f5de0f7"} Nov 24 19:58:49 crc kubenswrapper[4812]: I1124 19:58:49.240706 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.250629 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k42rc" event={"ID":"20887a94-307b-45ae-93a2-34f12ccc335e","Type":"ContainerStarted","Data":"45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe"} Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.253121 4812 generic.go:334] "Generic (PLEG): container finished" podID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerID="66213f8ad4d7fdd8486c9960954a78bf244d25c2dd1312b37a171320603dbd3f" exitCode=0 Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.253171 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwff" event={"ID":"7577bfbb-5db5-40ba-af0b-6eeca708a7da","Type":"ContainerDied","Data":"66213f8ad4d7fdd8486c9960954a78bf244d25c2dd1312b37a171320603dbd3f"} Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.691735 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-swg9k"] Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.695257 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.715673 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swg9k"] Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.843905 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wg2\" (UniqueName: \"kubernetes.io/projected/b1d8257f-c44f-455e-8648-be87425c6242-kube-api-access-57wg2\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.844265 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-utilities\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.844541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-catalog-content\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.884990 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-94pwl"] Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.886834 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.907883 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94pwl"] Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.950680 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lsg\" (UniqueName: \"kubernetes.io/projected/90724d16-7db7-43e8-bab7-86123d4c911d-kube-api-access-r9lsg\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.950750 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-utilities\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.950781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-catalog-content\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.950803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-catalog-content\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.957106 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57wg2\" (UniqueName: \"kubernetes.io/projected/b1d8257f-c44f-455e-8648-be87425c6242-kube-api-access-57wg2\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.957255 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-utilities\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.957844 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-utilities\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.958116 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-catalog-content\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:50 crc kubenswrapper[4812]: I1124 19:58:50.988801 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-57wg2\" (UniqueName: \"kubernetes.io/projected/b1d8257f-c44f-455e-8648-be87425c6242-kube-api-access-57wg2\") pod \"certified-operators-swg9k\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.034854 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.059795 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-utilities\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.059867 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-catalog-content\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.059983 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lsg\" (UniqueName: \"kubernetes.io/projected/90724d16-7db7-43e8-bab7-86123d4c911d-kube-api-access-r9lsg\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.061241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-utilities\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.061543 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-catalog-content\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.079804 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lsg\" (UniqueName: \"kubernetes.io/projected/90724d16-7db7-43e8-bab7-86123d4c911d-kube-api-access-r9lsg\") pod \"redhat-operators-94pwl\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.217860 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.269244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwff" event={"ID":"7577bfbb-5db5-40ba-af0b-6eeca708a7da","Type":"ContainerStarted","Data":"022f6fd2ed6547dc1b355b5a7333520663d4f271062f1d37a3d3408e46490fe0"} Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.271702 4812 generic.go:334] "Generic (PLEG): container finished" podID="20887a94-307b-45ae-93a2-34f12ccc335e" containerID="45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe" exitCode=0 Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.271856 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k42rc" event={"ID":"20887a94-307b-45ae-93a2-34f12ccc335e","Type":"ContainerDied","Data":"45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe"} Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.278304 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swg9k"] Nov 24 19:58:51 crc kubenswrapper[4812]: I1124 19:58:51.462220 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-94pwl"] Nov 24 19:58:51 crc kubenswrapper[4812]: W1124 19:58:51.467684 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90724d16_7db7_43e8_bab7_86123d4c911d.slice/crio-3c3cbe5f8f08c77d6009d29bd90b8f79c11f50323557eb03b258d3c403eb2bb6 WatchSource:0}: Error finding container 3c3cbe5f8f08c77d6009d29bd90b8f79c11f50323557eb03b258d3c403eb2bb6: Status 404 returned error can't find the container with id 3c3cbe5f8f08c77d6009d29bd90b8f79c11f50323557eb03b258d3c403eb2bb6 Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.281454 4812 generic.go:334] "Generic (PLEG): container finished" podID="b1d8257f-c44f-455e-8648-be87425c6242" containerID="0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49" exitCode=0 Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.281548 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg9k" event={"ID":"b1d8257f-c44f-455e-8648-be87425c6242","Type":"ContainerDied","Data":"0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49"} Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.281575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg9k" event={"ID":"b1d8257f-c44f-455e-8648-be87425c6242","Type":"ContainerStarted","Data":"2dd6b85be5fa7e4b55724d6b7d12adc2a5853c469c0574fbf9f537c9d2b2db68"} Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.285449 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k42rc" event={"ID":"20887a94-307b-45ae-93a2-34f12ccc335e","Type":"ContainerStarted","Data":"6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d"} Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.288467 4812 generic.go:334] "Generic (PLEG): container finished" podID="90724d16-7db7-43e8-bab7-86123d4c911d" containerID="2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54" exitCode=0 Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.288551 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94pwl" 
event={"ID":"90724d16-7db7-43e8-bab7-86123d4c911d","Type":"ContainerDied","Data":"2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54"} Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.288590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94pwl" event={"ID":"90724d16-7db7-43e8-bab7-86123d4c911d","Type":"ContainerStarted","Data":"3c3cbe5f8f08c77d6009d29bd90b8f79c11f50323557eb03b258d3c403eb2bb6"} Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.291252 4812 generic.go:334] "Generic (PLEG): container finished" podID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerID="022f6fd2ed6547dc1b355b5a7333520663d4f271062f1d37a3d3408e46490fe0" exitCode=0 Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.291290 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwff" event={"ID":"7577bfbb-5db5-40ba-af0b-6eeca708a7da","Type":"ContainerDied","Data":"022f6fd2ed6547dc1b355b5a7333520663d4f271062f1d37a3d3408e46490fe0"} Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.335920 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k42rc" podStartSLOduration=1.637179031 podStartE2EDuration="4.335900501s" podCreationTimestamp="2025-11-24 19:58:48 +0000 UTC" firstStartedPulling="2025-11-24 19:58:49.240505404 +0000 UTC m=+2523.029457775" lastFinishedPulling="2025-11-24 19:58:51.939226874 +0000 UTC m=+2525.728179245" observedRunningTime="2025-11-24 19:58:52.330160519 +0000 UTC m=+2526.119112890" watchObservedRunningTime="2025-11-24 19:58:52.335900501 +0000 UTC m=+2526.124852872" Nov 24 19:58:52 crc kubenswrapper[4812]: I1124 19:58:52.965877 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:58:52 crc kubenswrapper[4812]: E1124 19:58:52.966587 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:58:53 crc kubenswrapper[4812]: I1124 19:58:53.305173 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg9k" event={"ID":"b1d8257f-c44f-455e-8648-be87425c6242","Type":"ContainerStarted","Data":"3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3"} Nov 24 19:58:53 crc kubenswrapper[4812]: I1124 19:58:53.310995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94pwl" event={"ID":"90724d16-7db7-43e8-bab7-86123d4c911d","Type":"ContainerStarted","Data":"3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295"} Nov 24 19:58:53 crc kubenswrapper[4812]: I1124 19:58:53.313503 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwff" event={"ID":"7577bfbb-5db5-40ba-af0b-6eeca708a7da","Type":"ContainerStarted","Data":"103d380aee1d38478676e332715f62c2991d78364e51b50c73358c1fa04baa22"} Nov 24 19:58:53 crc kubenswrapper[4812]: I1124 19:58:53.342532 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nhwff" podStartSLOduration=2.829121439 
podStartE2EDuration="5.342513442s" podCreationTimestamp="2025-11-24 19:58:48 +0000 UTC" firstStartedPulling="2025-11-24 19:58:50.255110771 +0000 UTC m=+2524.044063152" lastFinishedPulling="2025-11-24 19:58:52.768502734 +0000 UTC m=+2526.557455155" observedRunningTime="2025-11-24 19:58:53.340978489 +0000 UTC m=+2527.129930880" watchObservedRunningTime="2025-11-24 19:58:53.342513442 +0000 UTC m=+2527.131465823"
Nov 24 19:58:54 crc kubenswrapper[4812]: I1124 19:58:54.325723 4812 generic.go:334] "Generic (PLEG): container finished" podID="90724d16-7db7-43e8-bab7-86123d4c911d" containerID="3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295" exitCode=0
Nov 24 19:58:54 crc kubenswrapper[4812]: I1124 19:58:54.325882 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94pwl" event={"ID":"90724d16-7db7-43e8-bab7-86123d4c911d","Type":"ContainerDied","Data":"3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295"}
Nov 24 19:58:54 crc kubenswrapper[4812]: I1124 19:58:54.328778 4812 generic.go:334] "Generic (PLEG): container finished" podID="b1d8257f-c44f-455e-8648-be87425c6242" containerID="3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3" exitCode=0
Nov 24 19:58:54 crc kubenswrapper[4812]: I1124 19:58:54.328820 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg9k" event={"ID":"b1d8257f-c44f-455e-8648-be87425c6242","Type":"ContainerDied","Data":"3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3"}
Nov 24 19:58:55 crc kubenswrapper[4812]: I1124 19:58:55.340235 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94pwl" event={"ID":"90724d16-7db7-43e8-bab7-86123d4c911d","Type":"ContainerStarted","Data":"4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd"}
Nov 24 19:58:55 crc kubenswrapper[4812]: I1124 19:58:55.344625 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg9k" event={"ID":"b1d8257f-c44f-455e-8648-be87425c6242","Type":"ContainerStarted","Data":"60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3"}
Nov 24 19:58:55 crc kubenswrapper[4812]: I1124 19:58:55.396885 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-94pwl" podStartSLOduration=2.713452878 podStartE2EDuration="5.396861985s" podCreationTimestamp="2025-11-24 19:58:50 +0000 UTC" firstStartedPulling="2025-11-24 19:58:52.289998534 +0000 UTC m=+2526.078950905" lastFinishedPulling="2025-11-24 19:58:54.973407631 +0000 UTC m=+2528.762360012" observedRunningTime="2025-11-24 19:58:55.37012038 +0000 UTC m=+2529.159072761" watchObservedRunningTime="2025-11-24 19:58:55.396861985 +0000 UTC m=+2529.185814356"
Nov 24 19:58:55 crc kubenswrapper[4812]: I1124 19:58:55.400302 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-swg9k" podStartSLOduration=2.913352627 podStartE2EDuration="5.400292712s" podCreationTimestamp="2025-11-24 19:58:50 +0000 UTC" firstStartedPulling="2025-11-24 19:58:52.282997787 +0000 UTC m=+2526.071950168" lastFinishedPulling="2025-11-24 19:58:54.769937842 +0000 UTC m=+2528.558890253" observedRunningTime="2025-11-24 19:58:55.393090499 +0000 UTC m=+2529.182042890" watchObservedRunningTime="2025-11-24 19:58:55.400292712 +0000 UTC m=+2529.189245083"
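
The pod_startup_latency_tracker entries above can be decoded as follows: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window, which is measured on the monotonic clock (the m=+... offsets). A small sketch reproducing the community-operators-nhwff figures, with the constants copied straight from the entry above:

package main

import "fmt"

// Reproduces the community-operators-nhwff startup numbers from the log.
// podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp;
// podStartSLOduration = e2e minus the image-pull window, taken from the
// monotonic (m=+...) clock readings rather than wall-clock timestamps.
func main() {
	const (
		e2e       = 5.342513442    // 19:58:53.342513442 - 19:58:48, seconds
		firstPull = 2524.044063152 // firstStartedPulling m=+ offset
		lastPull  = 2526.557455155 // lastFinishedPulling m=+ offset
	)
	slo := e2e - (lastPull - firstPull)
	fmt.Printf("podStartSLOduration=%.9f\n", slo) // prints 2.829121439
}
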
Nov 24 19:58:58 crc kubenswrapper[4812]: I1124 19:58:58.621676 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k42rc"
Nov 24 19:58:58 crc kubenswrapper[4812]: I1124 19:58:58.622527 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k42rc"
Nov 24 19:58:58 crc kubenswrapper[4812]: I1124 19:58:58.687888 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k42rc"
Nov 24 19:58:58 crc kubenswrapper[4812]: I1124 19:58:58.816808 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nhwff"
Nov 24 19:58:58 crc kubenswrapper[4812]: I1124 19:58:58.816858 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nhwff"
Nov 24 19:58:58 crc kubenswrapper[4812]: I1124 19:58:58.885125 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nhwff"
Nov 24 19:58:59 crc kubenswrapper[4812]: I1124 19:58:59.437225 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k42rc"
Nov 24 19:58:59 crc kubenswrapper[4812]: I1124 19:58:59.440639 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nhwff"
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.036074 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-swg9k"
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.036141 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-swg9k"
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.113157 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-swg9k"
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.218434 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-94pwl"
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.218804 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-94pwl"
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.276824 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k42rc"]
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.403847 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k42rc" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="registry-server" containerID="cri-o://6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d" gracePeriod=2
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.462841 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-swg9k"
Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.859990 4812 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k42rc" Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.879721 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nhwff"] Nov 24 19:59:01 crc kubenswrapper[4812]: I1124 19:59:01.880035 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nhwff" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="registry-server" containerID="cri-o://103d380aee1d38478676e332715f62c2991d78364e51b50c73358c1fa04baa22" gracePeriod=2 Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.033445 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t6r5\" (UniqueName: \"kubernetes.io/projected/20887a94-307b-45ae-93a2-34f12ccc335e-kube-api-access-5t6r5\") pod \"20887a94-307b-45ae-93a2-34f12ccc335e\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.033497 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-catalog-content\") pod \"20887a94-307b-45ae-93a2-34f12ccc335e\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.033607 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-utilities\") pod \"20887a94-307b-45ae-93a2-34f12ccc335e\" (UID: \"20887a94-307b-45ae-93a2-34f12ccc335e\") " Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.034923 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-utilities" (OuterVolumeSpecName: "utilities") pod "20887a94-307b-45ae-93a2-34f12ccc335e" (UID: "20887a94-307b-45ae-93a2-34f12ccc335e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.044558 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20887a94-307b-45ae-93a2-34f12ccc335e-kube-api-access-5t6r5" (OuterVolumeSpecName: "kube-api-access-5t6r5") pod "20887a94-307b-45ae-93a2-34f12ccc335e" (UID: "20887a94-307b-45ae-93a2-34f12ccc335e"). InnerVolumeSpecName "kube-api-access-5t6r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.068514 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20887a94-307b-45ae-93a2-34f12ccc335e" (UID: "20887a94-307b-45ae-93a2-34f12ccc335e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.135518 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t6r5\" (UniqueName: \"kubernetes.io/projected/20887a94-307b-45ae-93a2-34f12ccc335e-kube-api-access-5t6r5\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.136620 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.136650 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20887a94-307b-45ae-93a2-34f12ccc335e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.286381 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-94pwl" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="registry-server" probeResult="failure" output=< Nov 24 19:59:02 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 19:59:02 crc kubenswrapper[4812]: > Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.419976 4812 generic.go:334] "Generic (PLEG): container finished" podID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerID="103d380aee1d38478676e332715f62c2991d78364e51b50c73358c1fa04baa22" exitCode=0 Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.420097 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwff" event={"ID":"7577bfbb-5db5-40ba-af0b-6eeca708a7da","Type":"ContainerDied","Data":"103d380aee1d38478676e332715f62c2991d78364e51b50c73358c1fa04baa22"} Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.426697 4812 generic.go:334] "Generic (PLEG): container finished" podID="20887a94-307b-45ae-93a2-34f12ccc335e" containerID="6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d" exitCode=0 Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.426795 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k42rc"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.426911 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k42rc" event={"ID":"20887a94-307b-45ae-93a2-34f12ccc335e","Type":"ContainerDied","Data":"6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d"}
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.426956 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k42rc" event={"ID":"20887a94-307b-45ae-93a2-34f12ccc335e","Type":"ContainerDied","Data":"c0f064f3374376125f54e05777f5ca5e098b244e6760b2394c75390f6f5de0f7"}
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.426991 4812 scope.go:117] "RemoveContainer" containerID="6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.458314 4812 scope.go:117] "RemoveContainer" containerID="45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.484456 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k42rc"]
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.491438 4812 scope.go:117] "RemoveContainer" containerID="c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.494507 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k42rc"]
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.519869 4812 scope.go:117] "RemoveContainer" containerID="6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d"
Nov 24 19:59:02 crc kubenswrapper[4812]: E1124 19:59:02.520492 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d\": container with ID starting with 6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d not found: ID does not exist" containerID="6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.520543 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d"} err="failed to get container status \"6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d\": rpc error: code = NotFound desc = could not find container \"6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d\": container with ID starting with 6fdeac049b5d7cf3e6b48ee1d6b825d3d320d1c694aa820d8b9b249c9d42189d not found: ID does not exist"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.520594 4812 scope.go:117] "RemoveContainer" containerID="45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe"
Nov 24 19:59:02 crc kubenswrapper[4812]: E1124 19:59:02.521032 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe\": container with ID starting with 45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe not found: ID does not exist" containerID="45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe"
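
The NotFound errors above are a benign race: by the time the kubelet retries RemoveContainer during pod cleanup, CRI-O has already deleted the container, so the ContainerStatus call returns gRPC NotFound and the kubelet simply logs and moves on. A hypothetical helper showing the pattern against the CRI API; removeIfPresent is illustrative, not kubelet code:

package crihelper

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeIfPresent deletes a container by ID, treating gRPC NotFound from
// the runtime as success: the container is already gone, which is the
// situation the log entries above record.
func removeIfPresent(ctx context.Context, rt cri.RuntimeServiceClient, id string) error {
	_, err := rt.ContainerStatus(ctx, &cri.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed, nothing to do\n", id)
		return nil
	}
	if err != nil {
		return err
	}
	_, err = rt.RemoveContainer(ctx, &cri.RemoveContainerRequest{ContainerId: id})
	return err
}
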
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.521085 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe"} err="failed to get container status \"45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe\": rpc error: code = NotFound desc = could not find container \"45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe\": container with ID starting with 45cbc453c561818fc5f4c21b0dc5306b16228d33c463dd3f7728ae66a4754dfe not found: ID does not exist"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.521124 4812 scope.go:117] "RemoveContainer" containerID="c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf"
Nov 24 19:59:02 crc kubenswrapper[4812]: E1124 19:59:02.521516 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf\": container with ID starting with c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf not found: ID does not exist" containerID="c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.521550 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf"} err="failed to get container status \"c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf\": rpc error: code = NotFound desc = could not find container \"c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf\": container with ID starting with c6366817232c84eac8f0dac39d235456cc35da9a05530e7c5b1c1776302316cf not found: ID does not exist"
Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.892235 4812 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:59:02 crc kubenswrapper[4812]: I1124 19:59:02.976491 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" path="/var/lib/kubelet/pods/20887a94-307b-45ae-93a2-34f12ccc335e/volumes" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.053260 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfsst\" (UniqueName: \"kubernetes.io/projected/7577bfbb-5db5-40ba-af0b-6eeca708a7da-kube-api-access-kfsst\") pod \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.053442 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-catalog-content\") pod \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.053488 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-utilities\") pod \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\" (UID: \"7577bfbb-5db5-40ba-af0b-6eeca708a7da\") " Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.054857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-utilities" (OuterVolumeSpecName: "utilities") pod "7577bfbb-5db5-40ba-af0b-6eeca708a7da" (UID: "7577bfbb-5db5-40ba-af0b-6eeca708a7da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.060856 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7577bfbb-5db5-40ba-af0b-6eeca708a7da-kube-api-access-kfsst" (OuterVolumeSpecName: "kube-api-access-kfsst") pod "7577bfbb-5db5-40ba-af0b-6eeca708a7da" (UID: "7577bfbb-5db5-40ba-af0b-6eeca708a7da"). InnerVolumeSpecName "kube-api-access-kfsst". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.135379 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7577bfbb-5db5-40ba-af0b-6eeca708a7da" (UID: "7577bfbb-5db5-40ba-af0b-6eeca708a7da"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.155929 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.155982 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7577bfbb-5db5-40ba-af0b-6eeca708a7da-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.155991 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfsst\" (UniqueName: \"kubernetes.io/projected/7577bfbb-5db5-40ba-af0b-6eeca708a7da-kube-api-access-kfsst\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.440741 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nhwff" event={"ID":"7577bfbb-5db5-40ba-af0b-6eeca708a7da","Type":"ContainerDied","Data":"cbf60dfec1db4a08287584a40acf195151d27c8030f645f041f3713b9ee3cdc0"} Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.440773 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nhwff" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.440822 4812 scope.go:117] "RemoveContainer" containerID="103d380aee1d38478676e332715f62c2991d78364e51b50c73358c1fa04baa22" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.462494 4812 scope.go:117] "RemoveContainer" containerID="022f6fd2ed6547dc1b355b5a7333520663d4f271062f1d37a3d3408e46490fe0" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.482904 4812 scope.go:117] "RemoveContainer" containerID="66213f8ad4d7fdd8486c9960954a78bf244d25c2dd1312b37a171320603dbd3f" Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.536113 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nhwff"] Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.542866 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nhwff"] Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.679896 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swg9k"] Nov 24 19:59:03 crc kubenswrapper[4812]: I1124 19:59:03.680247 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-swg9k" podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="registry-server" containerID="cri-o://60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3" gracePeriod=2 Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.195991 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.372706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57wg2\" (UniqueName: \"kubernetes.io/projected/b1d8257f-c44f-455e-8648-be87425c6242-kube-api-access-57wg2\") pod \"b1d8257f-c44f-455e-8648-be87425c6242\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.373181 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-catalog-content\") pod \"b1d8257f-c44f-455e-8648-be87425c6242\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.373427 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-utilities\") pod \"b1d8257f-c44f-455e-8648-be87425c6242\" (UID: \"b1d8257f-c44f-455e-8648-be87425c6242\") " Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.375399 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-utilities" (OuterVolumeSpecName: "utilities") pod "b1d8257f-c44f-455e-8648-be87425c6242" (UID: "b1d8257f-c44f-455e-8648-be87425c6242"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.384596 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1d8257f-c44f-455e-8648-be87425c6242-kube-api-access-57wg2" (OuterVolumeSpecName: "kube-api-access-57wg2") pod "b1d8257f-c44f-455e-8648-be87425c6242" (UID: "b1d8257f-c44f-455e-8648-be87425c6242"). InnerVolumeSpecName "kube-api-access-57wg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.460369 4812 generic.go:334] "Generic (PLEG): container finished" podID="b1d8257f-c44f-455e-8648-be87425c6242" containerID="60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3" exitCode=0 Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.460467 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swg9k" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.460481 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg9k" event={"ID":"b1d8257f-c44f-455e-8648-be87425c6242","Type":"ContainerDied","Data":"60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3"} Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.460630 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swg9k" event={"ID":"b1d8257f-c44f-455e-8648-be87425c6242","Type":"ContainerDied","Data":"2dd6b85be5fa7e4b55724d6b7d12adc2a5853c469c0574fbf9f537c9d2b2db68"} Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.460675 4812 scope.go:117] "RemoveContainer" containerID="60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.476242 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.476303 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57wg2\" (UniqueName: \"kubernetes.io/projected/b1d8257f-c44f-455e-8648-be87425c6242-kube-api-access-57wg2\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.492191 4812 scope.go:117] "RemoveContainer" containerID="3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.527476 4812 scope.go:117] "RemoveContainer" containerID="0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.555627 4812 scope.go:117] "RemoveContainer" containerID="60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3" Nov 24 19:59:04 crc kubenswrapper[4812]: E1124 19:59:04.556289 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3\": container with ID starting with 60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3 not found: ID does not exist" containerID="60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.556445 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3"} err="failed to get container status \"60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3\": rpc error: code = NotFound desc = could not find container \"60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3\": container with ID starting with 60548264389ad97b4a7e7c55a87180f6c81728d28b99c86dda8ef0a96d7c45a3 not found: ID does not exist" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.556492 4812 scope.go:117] "RemoveContainer" containerID="3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3" Nov 24 19:59:04 crc kubenswrapper[4812]: E1124 19:59:04.557001 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3\": container with ID starting with 
3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3 not found: ID does not exist" containerID="3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.557071 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3"} err="failed to get container status \"3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3\": rpc error: code = NotFound desc = could not find container \"3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3\": container with ID starting with 3bf338e6ef193b5ac106ad34b48b0a1f3f7c3fc3cb76f7a1421fd2127be8a7b3 not found: ID does not exist" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.557125 4812 scope.go:117] "RemoveContainer" containerID="0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49" Nov 24 19:59:04 crc kubenswrapper[4812]: E1124 19:59:04.557618 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49\": container with ID starting with 0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49 not found: ID does not exist" containerID="0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.557676 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49"} err="failed to get container status \"0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49\": rpc error: code = NotFound desc = could not find container \"0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49\": container with ID starting with 0e5c7aa8fbc92ea8ffdae56fb576de70c18c9d3f0558cc150c28f267666bcf49 not found: ID does not exist" Nov 24 19:59:04 crc kubenswrapper[4812]: I1124 19:59:04.986554 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" path="/var/lib/kubelet/pods/7577bfbb-5db5-40ba-af0b-6eeca708a7da/volumes" Nov 24 19:59:06 crc kubenswrapper[4812]: I1124 19:59:06.220877 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1d8257f-c44f-455e-8648-be87425c6242" (UID: "b1d8257f-c44f-455e-8648-be87425c6242"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:06 crc kubenswrapper[4812]: I1124 19:59:06.307957 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1d8257f-c44f-455e-8648-be87425c6242-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:06 crc kubenswrapper[4812]: I1124 19:59:06.311770 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swg9k"] Nov 24 19:59:06 crc kubenswrapper[4812]: I1124 19:59:06.322317 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-swg9k"] Nov 24 19:59:06 crc kubenswrapper[4812]: I1124 19:59:06.977442 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1d8257f-c44f-455e-8648-be87425c6242" path="/var/lib/kubelet/pods/b1d8257f-c44f-455e-8648-be87425c6242/volumes" Nov 24 19:59:07 crc kubenswrapper[4812]: I1124 19:59:07.966439 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:59:07 crc kubenswrapper[4812]: E1124 19:59:07.968272 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:59:11 crc kubenswrapper[4812]: I1124 19:59:11.288681 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:59:11 crc kubenswrapper[4812]: I1124 19:59:11.357602 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:59:11 crc kubenswrapper[4812]: I1124 19:59:11.539050 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94pwl"] Nov 24 19:59:12 crc kubenswrapper[4812]: I1124 19:59:12.570013 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-94pwl" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="registry-server" containerID="cri-o://4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd" gracePeriod=2 Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.023146 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.221300 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lsg\" (UniqueName: \"kubernetes.io/projected/90724d16-7db7-43e8-bab7-86123d4c911d-kube-api-access-r9lsg\") pod \"90724d16-7db7-43e8-bab7-86123d4c911d\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.221585 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-catalog-content\") pod \"90724d16-7db7-43e8-bab7-86123d4c911d\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.221694 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-utilities\") pod \"90724d16-7db7-43e8-bab7-86123d4c911d\" (UID: \"90724d16-7db7-43e8-bab7-86123d4c911d\") " Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.223001 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-utilities" (OuterVolumeSpecName: "utilities") pod "90724d16-7db7-43e8-bab7-86123d4c911d" (UID: "90724d16-7db7-43e8-bab7-86123d4c911d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.228146 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90724d16-7db7-43e8-bab7-86123d4c911d-kube-api-access-r9lsg" (OuterVolumeSpecName: "kube-api-access-r9lsg") pod "90724d16-7db7-43e8-bab7-86123d4c911d" (UID: "90724d16-7db7-43e8-bab7-86123d4c911d"). InnerVolumeSpecName "kube-api-access-r9lsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.323859 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.323902 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lsg\" (UniqueName: \"kubernetes.io/projected/90724d16-7db7-43e8-bab7-86123d4c911d-kube-api-access-r9lsg\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.327447 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90724d16-7db7-43e8-bab7-86123d4c911d" (UID: "90724d16-7db7-43e8-bab7-86123d4c911d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.425271 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90724d16-7db7-43e8-bab7-86123d4c911d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.582753 4812 generic.go:334] "Generic (PLEG): container finished" podID="90724d16-7db7-43e8-bab7-86123d4c911d" containerID="4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd" exitCode=0 Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.583074 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94pwl" event={"ID":"90724d16-7db7-43e8-bab7-86123d4c911d","Type":"ContainerDied","Data":"4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd"} Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.583381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-94pwl" event={"ID":"90724d16-7db7-43e8-bab7-86123d4c911d","Type":"ContainerDied","Data":"3c3cbe5f8f08c77d6009d29bd90b8f79c11f50323557eb03b258d3c403eb2bb6"} Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.583105 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-94pwl" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.583407 4812 scope.go:117] "RemoveContainer" containerID="4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.613110 4812 scope.go:117] "RemoveContainer" containerID="3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.648133 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-94pwl"] Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.653300 4812 scope.go:117] "RemoveContainer" containerID="2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.659582 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-94pwl"] Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.694856 4812 scope.go:117] "RemoveContainer" containerID="4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd" Nov 24 19:59:13 crc kubenswrapper[4812]: E1124 19:59:13.695600 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd\": container with ID starting with 4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd not found: ID does not exist" containerID="4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.695678 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd"} err="failed to get container status \"4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd\": rpc error: code = NotFound desc = could not find container \"4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd\": container with ID starting with 4a4a86f4848aee50569215b080e364dd7648c7fc3c32eaa664f794f9c413c3dd not found: ID does not exist" Nov 24 19:59:13 crc 
kubenswrapper[4812]: I1124 19:59:13.695718 4812 scope.go:117] "RemoveContainer" containerID="3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295" Nov 24 19:59:13 crc kubenswrapper[4812]: E1124 19:59:13.696248 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295\": container with ID starting with 3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295 not found: ID does not exist" containerID="3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.696287 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295"} err="failed to get container status \"3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295\": rpc error: code = NotFound desc = could not find container \"3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295\": container with ID starting with 3fdde4c47dd34a82632b4a474e1fdf5b9c71ee095c6c88ddb032112c83576295 not found: ID does not exist" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.696314 4812 scope.go:117] "RemoveContainer" containerID="2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54" Nov 24 19:59:13 crc kubenswrapper[4812]: E1124 19:59:13.696791 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54\": container with ID starting with 2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54 not found: ID does not exist" containerID="2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54" Nov 24 19:59:13 crc kubenswrapper[4812]: I1124 19:59:13.696842 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54"} err="failed to get container status \"2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54\": rpc error: code = NotFound desc = could not find container \"2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54\": container with ID starting with 2fd338ef6ec61e882577e92cd9ac2a2af3c7e1a175ff771c74b488045a66fb54 not found: ID does not exist" Nov 24 19:59:14 crc kubenswrapper[4812]: I1124 19:59:14.981571 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" path="/var/lib/kubelet/pods/90724d16-7db7-43e8-bab7-86123d4c911d/volumes" Nov 24 19:59:20 crc kubenswrapper[4812]: I1124 19:59:20.966493 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:59:20 crc kubenswrapper[4812]: E1124 19:59:20.967351 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:59:32 crc kubenswrapper[4812]: I1124 19:59:32.967085 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" 
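
The machine-config-daemon entries recurring above (19:59:07, 19:59:20, 19:59:32, 19:59:43, 19:59:58) show a container stuck at the top of the kubelet's restart back-off: every sync attempt is skipped with "back-off 5m0s". A minimal sketch of the documented CrashLoopBackOff schedule (10s initial delay, doubling per restart, capped at 5m); crashLoopDelay is a hypothetical helper for illustration, not kubelet code:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay mirrors the kubelet's documented CrashLoopBackOff
// policy: 10s after the first failure, doubling on each subsequent
// failure, capped at 5 minutes.
func crashLoopDelay(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	// restarts 1..7 -> 10s 20s 40s 1m20s 2m40s 5m0s 5m0s; once the cap is
	// reached the pod stays in "back-off 5m0s", matching the entries above.
	for r := 1; r <= 7; r++ {
		fmt.Printf("restart %d: wait %v\n", r, crashLoopDelay(r))
	}
}
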
Nov 24 19:59:32 crc kubenswrapper[4812]: E1124 19:59:32.968015 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:59:43 crc kubenswrapper[4812]: I1124 19:59:43.967078 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:59:43 crc kubenswrapper[4812]: E1124 19:59:43.968364 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 19:59:58 crc kubenswrapper[4812]: I1124 19:59:58.966078 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 19:59:58 crc kubenswrapper[4812]: E1124 19:59:58.967246 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.168538 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz"] Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.168897 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.168912 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.168923 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.168931 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.168945 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.168953 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.168969 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.168977 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.168991 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.168998 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.169015 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169022 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.169042 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169049 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.169061 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169068 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="extract-utilities" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.169081 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169089 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.169101 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169109 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.169127 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169134 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: E1124 20:00:00.169147 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169154 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="extract-content" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169301 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1d8257f-c44f-455e-8648-be87425c6242" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169316 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="20887a94-307b-45ae-93a2-34f12ccc335e" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169358 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7577bfbb-5db5-40ba-af0b-6eeca708a7da" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169385 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="90724d16-7db7-43e8-bab7-86123d4c911d" containerName="registry-server" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.169913 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.173959 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.174565 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.177875 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz"] Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.298196 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85bbfa18-5481-4840-a56f-2b2ba9da1bab-secret-volume\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.298263 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5sz\" (UniqueName: \"kubernetes.io/projected/85bbfa18-5481-4840-a56f-2b2ba9da1bab-kube-api-access-gq5sz\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.298324 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85bbfa18-5481-4840-a56f-2b2ba9da1bab-config-volume\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.400028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85bbfa18-5481-4840-a56f-2b2ba9da1bab-secret-volume\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.400084 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq5sz\" (UniqueName: \"kubernetes.io/projected/85bbfa18-5481-4840-a56f-2b2ba9da1bab-kube-api-access-gq5sz\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc 
kubenswrapper[4812]: I1124 20:00:00.400112 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85bbfa18-5481-4840-a56f-2b2ba9da1bab-config-volume\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.401078 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85bbfa18-5481-4840-a56f-2b2ba9da1bab-config-volume\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.408757 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85bbfa18-5481-4840-a56f-2b2ba9da1bab-secret-volume\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.433681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5sz\" (UniqueName: \"kubernetes.io/projected/85bbfa18-5481-4840-a56f-2b2ba9da1bab-kube-api-access-gq5sz\") pod \"collect-profiles-29400240-qknhz\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.506631 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:00 crc kubenswrapper[4812]: I1124 20:00:00.768186 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz"] Nov 24 20:00:01 crc kubenswrapper[4812]: I1124 20:00:01.038079 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" event={"ID":"85bbfa18-5481-4840-a56f-2b2ba9da1bab","Type":"ContainerStarted","Data":"fb24b21c7a2f5f6be724b3b06b23490ebbf4b8adabe30d4ac7b2bedadda869d9"} Nov 24 20:00:01 crc kubenswrapper[4812]: I1124 20:00:01.038697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" event={"ID":"85bbfa18-5481-4840-a56f-2b2ba9da1bab","Type":"ContainerStarted","Data":"1c740b4bb4c08127671138aebf9099a7233c673adf8126bd3c7b454f32b9a58b"} Nov 24 20:00:01 crc kubenswrapper[4812]: I1124 20:00:01.063317 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" podStartSLOduration=1.063294359 podStartE2EDuration="1.063294359s" podCreationTimestamp="2025-11-24 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:00:01.055989862 +0000 UTC m=+2594.844942273" watchObservedRunningTime="2025-11-24 20:00:01.063294359 +0000 UTC m=+2594.852246740" Nov 24 20:00:02 crc kubenswrapper[4812]: I1124 20:00:02.048586 4812 generic.go:334] "Generic (PLEG): container finished" podID="85bbfa18-5481-4840-a56f-2b2ba9da1bab" 
containerID="fb24b21c7a2f5f6be724b3b06b23490ebbf4b8adabe30d4ac7b2bedadda869d9" exitCode=0 Nov 24 20:00:02 crc kubenswrapper[4812]: I1124 20:00:02.048681 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" event={"ID":"85bbfa18-5481-4840-a56f-2b2ba9da1bab","Type":"ContainerDied","Data":"fb24b21c7a2f5f6be724b3b06b23490ebbf4b8adabe30d4ac7b2bedadda869d9"} Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.385796 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.551914 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85bbfa18-5481-4840-a56f-2b2ba9da1bab-secret-volume\") pod \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.552380 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85bbfa18-5481-4840-a56f-2b2ba9da1bab-config-volume\") pod \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.552468 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq5sz\" (UniqueName: \"kubernetes.io/projected/85bbfa18-5481-4840-a56f-2b2ba9da1bab-kube-api-access-gq5sz\") pod \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\" (UID: \"85bbfa18-5481-4840-a56f-2b2ba9da1bab\") " Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.553941 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85bbfa18-5481-4840-a56f-2b2ba9da1bab-config-volume" (OuterVolumeSpecName: "config-volume") pod "85bbfa18-5481-4840-a56f-2b2ba9da1bab" (UID: "85bbfa18-5481-4840-a56f-2b2ba9da1bab"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.558311 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85bbfa18-5481-4840-a56f-2b2ba9da1bab-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "85bbfa18-5481-4840-a56f-2b2ba9da1bab" (UID: "85bbfa18-5481-4840-a56f-2b2ba9da1bab"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.574033 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85bbfa18-5481-4840-a56f-2b2ba9da1bab-kube-api-access-gq5sz" (OuterVolumeSpecName: "kube-api-access-gq5sz") pod "85bbfa18-5481-4840-a56f-2b2ba9da1bab" (UID: "85bbfa18-5481-4840-a56f-2b2ba9da1bab"). InnerVolumeSpecName "kube-api-access-gq5sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.654714 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/85bbfa18-5481-4840-a56f-2b2ba9da1bab-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.655031 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85bbfa18-5481-4840-a56f-2b2ba9da1bab-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 20:00:03 crc kubenswrapper[4812]: I1124 20:00:03.655173 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq5sz\" (UniqueName: \"kubernetes.io/projected/85bbfa18-5481-4840-a56f-2b2ba9da1bab-kube-api-access-gq5sz\") on node \"crc\" DevicePath \"\"" Nov 24 20:00:04 crc kubenswrapper[4812]: I1124 20:00:04.072059 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" event={"ID":"85bbfa18-5481-4840-a56f-2b2ba9da1bab","Type":"ContainerDied","Data":"1c740b4bb4c08127671138aebf9099a7233c673adf8126bd3c7b454f32b9a58b"} Nov 24 20:00:04 crc kubenswrapper[4812]: I1124 20:00:04.072454 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c740b4bb4c08127671138aebf9099a7233c673adf8126bd3c7b454f32b9a58b" Nov 24 20:00:04 crc kubenswrapper[4812]: I1124 20:00:04.072176 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz" Nov 24 20:00:04 crc kubenswrapper[4812]: I1124 20:00:04.468745 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx"] Nov 24 20:00:04 crc kubenswrapper[4812]: I1124 20:00:04.474867 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400195-lwbwx"] Nov 24 20:00:04 crc kubenswrapper[4812]: I1124 20:00:04.985706 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a685cc-973c-4317-b6fe-5bc842e32099" path="/var/lib/kubelet/pods/89a685cc-973c-4317-b6fe-5bc842e32099/volumes" Nov 24 20:00:12 crc kubenswrapper[4812]: I1124 20:00:12.966027 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:00:12 crc kubenswrapper[4812]: E1124 20:00:12.966565 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:00:24 crc kubenswrapper[4812]: I1124 20:00:24.965735 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:00:24 crc kubenswrapper[4812]: E1124 20:00:24.966833 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:00:38 crc kubenswrapper[4812]: I1124 20:00:38.966119 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:00:38 crc kubenswrapper[4812]: E1124 20:00:38.967901 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:00:51 crc kubenswrapper[4812]: I1124 20:00:51.555224 4812 scope.go:117] "RemoveContainer" containerID="04cb93a7af0996c4077c97be9869ff3e819c474357cc487a1130750133c5197e" Nov 24 20:00:51 crc kubenswrapper[4812]: I1124 20:00:51.965271 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:00:51 crc kubenswrapper[4812]: E1124 20:00:51.965645 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:01:03 crc kubenswrapper[4812]: I1124 20:01:03.966308 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:01:03 crc kubenswrapper[4812]: E1124 20:01:03.967499 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:01:18 crc kubenswrapper[4812]: I1124 20:01:18.966233 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:01:18 crc kubenswrapper[4812]: E1124 20:01:18.967234 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:01:30 crc kubenswrapper[4812]: I1124 20:01:30.966898 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:01:30 crc kubenswrapper[4812]: E1124 20:01:30.968102 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:01:45 crc kubenswrapper[4812]: I1124 20:01:45.965152 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:01:45 crc kubenswrapper[4812]: E1124 20:01:45.965779 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:01:58 crc kubenswrapper[4812]: I1124 20:01:58.965535 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:01:58 crc kubenswrapper[4812]: E1124 20:01:58.966220 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:02:12 crc kubenswrapper[4812]: I1124 20:02:12.967688 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:02:13 crc kubenswrapper[4812]: I1124 20:02:13.291466 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"efc9653770e8a7159f93bbb0600f4253c0d85ff3f24447adda2f8018ee8fd800"} Nov 24 20:04:33 crc kubenswrapper[4812]: I1124 20:04:32.999445 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:04:33 crc kubenswrapper[4812]: I1124 20:04:33.000102 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:05:02 crc kubenswrapper[4812]: I1124 20:05:02.999100 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:05:03 crc kubenswrapper[4812]: I1124 20:05:02.999985 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:05:32 crc kubenswrapper[4812]: I1124 20:05:32.998310 4812 
patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:05:33 crc kubenswrapper[4812]: I1124 20:05:32.999466 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:05:33 crc kubenswrapper[4812]: I1124 20:05:32.999530 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:05:33 crc kubenswrapper[4812]: I1124 20:05:33.000432 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efc9653770e8a7159f93bbb0600f4253c0d85ff3f24447adda2f8018ee8fd800"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:05:33 crc kubenswrapper[4812]: I1124 20:05:33.000590 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://efc9653770e8a7159f93bbb0600f4253c0d85ff3f24447adda2f8018ee8fd800" gracePeriod=600 Nov 24 20:05:33 crc kubenswrapper[4812]: I1124 20:05:33.219253 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="efc9653770e8a7159f93bbb0600f4253c0d85ff3f24447adda2f8018ee8fd800" exitCode=0 Nov 24 20:05:33 crc kubenswrapper[4812]: I1124 20:05:33.219313 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"efc9653770e8a7159f93bbb0600f4253c0d85ff3f24447adda2f8018ee8fd800"} Nov 24 20:05:33 crc kubenswrapper[4812]: I1124 20:05:33.219676 4812 scope.go:117] "RemoveContainer" containerID="f05d4dde1188f9ee7994b485756b7397e6ede37dada34c1da4f0607137142eb0" Nov 24 20:05:34 crc kubenswrapper[4812]: I1124 20:05:34.235620 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524"} Nov 24 20:08:02 crc kubenswrapper[4812]: I1124 20:08:02.999191 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:08:03 crc kubenswrapper[4812]: I1124 20:08:02.999928 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Nov 24 20:08:32 crc kubenswrapper[4812]: I1124 20:08:32.998530 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:08:33 crc kubenswrapper[4812]: I1124 20:08:32.999291 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:02.999259 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:03.000127 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:03.000218 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:03.001298 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:03.001635 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" gracePeriod=600 Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:03.229685 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" exitCode=0 Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:03.229741 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524"} Nov 24 20:09:03 crc kubenswrapper[4812]: I1124 20:09:03.229821 4812 scope.go:117] "RemoveContainer" containerID="efc9653770e8a7159f93bbb0600f4253c0d85ff3f24447adda2f8018ee8fd800" Nov 24 20:09:03 crc kubenswrapper[4812]: E1124 20:09:03.679385 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:09:04 crc kubenswrapper[4812]: I1124 20:09:04.247210 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:09:04 crc kubenswrapper[4812]: E1124 20:09:04.247841 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:09:19 crc kubenswrapper[4812]: I1124 20:09:19.966776 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:09:19 crc kubenswrapper[4812]: E1124 20:09:19.967791 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:09:30 crc kubenswrapper[4812]: I1124 20:09:30.966556 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:09:30 crc kubenswrapper[4812]: E1124 20:09:30.967857 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.594850 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7wlq8"] Nov 24 20:09:38 crc kubenswrapper[4812]: E1124 20:09:38.595891 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85bbfa18-5481-4840-a56f-2b2ba9da1bab" containerName="collect-profiles" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.595911 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="85bbfa18-5481-4840-a56f-2b2ba9da1bab" containerName="collect-profiles" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.596184 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="85bbfa18-5481-4840-a56f-2b2ba9da1bab" containerName="collect-profiles" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.598173 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.609393 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wlq8"] Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.674210 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6j4x\" (UniqueName: \"kubernetes.io/projected/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-kube-api-access-b6j4x\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.674285 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-utilities\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.674311 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-catalog-content\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.775362 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-utilities\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.775409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-catalog-content\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.775473 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6j4x\" (UniqueName: \"kubernetes.io/projected/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-kube-api-access-b6j4x\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.776077 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-utilities\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.777645 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-catalog-content\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.801508 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b6j4x\" (UniqueName: \"kubernetes.io/projected/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-kube-api-access-b6j4x\") pod \"community-operators-7wlq8\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:38 crc kubenswrapper[4812]: I1124 20:09:38.964490 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:39 crc kubenswrapper[4812]: I1124 20:09:39.515693 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7wlq8"] Nov 24 20:09:39 crc kubenswrapper[4812]: I1124 20:09:39.579370 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wlq8" event={"ID":"6b925a7f-2b72-4ece-ac2f-5a313d440ed1","Type":"ContainerStarted","Data":"eee6703a6c5f0ae8caac1cd536f4c502a5258a89fc8450ccc21b477c4d98730f"} Nov 24 20:09:40 crc kubenswrapper[4812]: I1124 20:09:40.592246 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerID="44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d" exitCode=0 Nov 24 20:09:40 crc kubenswrapper[4812]: I1124 20:09:40.592532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wlq8" event={"ID":"6b925a7f-2b72-4ece-ac2f-5a313d440ed1","Type":"ContainerDied","Data":"44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d"} Nov 24 20:09:40 crc kubenswrapper[4812]: I1124 20:09:40.595022 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 20:09:41 crc kubenswrapper[4812]: I1124 20:09:41.605189 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wlq8" event={"ID":"6b925a7f-2b72-4ece-ac2f-5a313d440ed1","Type":"ContainerStarted","Data":"de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1"} Nov 24 20:09:41 crc kubenswrapper[4812]: I1124 20:09:41.966933 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:09:41 crc kubenswrapper[4812]: E1124 20:09:41.967400 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.355310 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sf44b"] Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.358901 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.372704 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sf44b"] Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.458855 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcslp\" (UniqueName: \"kubernetes.io/projected/df846edb-06bc-4d8e-950e-56f30fdea78f-kube-api-access-zcslp\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.458910 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-utilities\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.459001 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-catalog-content\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.561054 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcslp\" (UniqueName: \"kubernetes.io/projected/df846edb-06bc-4d8e-950e-56f30fdea78f-kube-api-access-zcslp\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.561466 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-utilities\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.561752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-catalog-content\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.562389 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-catalog-content\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.562427 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-utilities\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.585699 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zcslp\" (UniqueName: \"kubernetes.io/projected/df846edb-06bc-4d8e-950e-56f30fdea78f-kube-api-access-zcslp\") pod \"redhat-marketplace-sf44b\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.615009 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerID="de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1" exitCode=0 Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.615075 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wlq8" event={"ID":"6b925a7f-2b72-4ece-ac2f-5a313d440ed1","Type":"ContainerDied","Data":"de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1"} Nov 24 20:09:42 crc kubenswrapper[4812]: I1124 20:09:42.685222 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:43 crc kubenswrapper[4812]: I1124 20:09:43.149752 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sf44b"] Nov 24 20:09:43 crc kubenswrapper[4812]: W1124 20:09:43.160268 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf846edb_06bc_4d8e_950e_56f30fdea78f.slice/crio-7a5aac95431e3ae9dfc09eeaf3c9496db5d8003024ce514ec1aaa3ebea179c64 WatchSource:0}: Error finding container 7a5aac95431e3ae9dfc09eeaf3c9496db5d8003024ce514ec1aaa3ebea179c64: Status 404 returned error can't find the container with id 7a5aac95431e3ae9dfc09eeaf3c9496db5d8003024ce514ec1aaa3ebea179c64 Nov 24 20:09:43 crc kubenswrapper[4812]: I1124 20:09:43.625143 4812 generic.go:334] "Generic (PLEG): container finished" podID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerID="1be79251ee2c72663de55e2626bb715da0c7a61c77d4ebca01557bd2be9c9d1f" exitCode=0 Nov 24 20:09:43 crc kubenswrapper[4812]: I1124 20:09:43.625260 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf44b" event={"ID":"df846edb-06bc-4d8e-950e-56f30fdea78f","Type":"ContainerDied","Data":"1be79251ee2c72663de55e2626bb715da0c7a61c77d4ebca01557bd2be9c9d1f"} Nov 24 20:09:43 crc kubenswrapper[4812]: I1124 20:09:43.625301 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf44b" event={"ID":"df846edb-06bc-4d8e-950e-56f30fdea78f","Type":"ContainerStarted","Data":"7a5aac95431e3ae9dfc09eeaf3c9496db5d8003024ce514ec1aaa3ebea179c64"} Nov 24 20:09:43 crc kubenswrapper[4812]: I1124 20:09:43.629160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wlq8" event={"ID":"6b925a7f-2b72-4ece-ac2f-5a313d440ed1","Type":"ContainerStarted","Data":"9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da"} Nov 24 20:09:43 crc kubenswrapper[4812]: I1124 20:09:43.688706 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7wlq8" podStartSLOduration=3.247996846 podStartE2EDuration="5.688684319s" podCreationTimestamp="2025-11-24 20:09:38 +0000 UTC" firstStartedPulling="2025-11-24 20:09:40.594820814 +0000 UTC m=+3174.383773175" lastFinishedPulling="2025-11-24 20:09:43.035508277 +0000 UTC m=+3176.824460648" observedRunningTime="2025-11-24 20:09:43.685494818 +0000 UTC m=+3177.474447209" 
watchObservedRunningTime="2025-11-24 20:09:43.688684319 +0000 UTC m=+3177.477636730" Nov 24 20:09:44 crc kubenswrapper[4812]: I1124 20:09:44.641747 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf44b" event={"ID":"df846edb-06bc-4d8e-950e-56f30fdea78f","Type":"ContainerStarted","Data":"1e5dcebd2d55ae3450d091c0ad0538f787e3cd71e1510dab3efb4e40b0691c6b"} Nov 24 20:09:45 crc kubenswrapper[4812]: I1124 20:09:45.668300 4812 generic.go:334] "Generic (PLEG): container finished" podID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerID="1e5dcebd2d55ae3450d091c0ad0538f787e3cd71e1510dab3efb4e40b0691c6b" exitCode=0 Nov 24 20:09:45 crc kubenswrapper[4812]: I1124 20:09:45.668383 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf44b" event={"ID":"df846edb-06bc-4d8e-950e-56f30fdea78f","Type":"ContainerDied","Data":"1e5dcebd2d55ae3450d091c0ad0538f787e3cd71e1510dab3efb4e40b0691c6b"} Nov 24 20:09:46 crc kubenswrapper[4812]: I1124 20:09:46.682004 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf44b" event={"ID":"df846edb-06bc-4d8e-950e-56f30fdea78f","Type":"ContainerStarted","Data":"f0b08888f193f2c753f7ec0456f0510e9a58f3c42c15041975955ad2931eab8e"} Nov 24 20:09:46 crc kubenswrapper[4812]: I1124 20:09:46.718198 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sf44b" podStartSLOduration=2.176800873 podStartE2EDuration="4.718173085s" podCreationTimestamp="2025-11-24 20:09:42 +0000 UTC" firstStartedPulling="2025-11-24 20:09:43.627771159 +0000 UTC m=+3177.416723560" lastFinishedPulling="2025-11-24 20:09:46.169143371 +0000 UTC m=+3179.958095772" observedRunningTime="2025-11-24 20:09:46.710151247 +0000 UTC m=+3180.499103638" watchObservedRunningTime="2025-11-24 20:09:46.718173085 +0000 UTC m=+3180.507125496" Nov 24 20:09:48 crc kubenswrapper[4812]: I1124 20:09:48.964898 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:48 crc kubenswrapper[4812]: I1124 20:09:48.983750 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:49 crc kubenswrapper[4812]: I1124 20:09:49.045770 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:49 crc kubenswrapper[4812]: I1124 20:09:49.778915 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:50 crc kubenswrapper[4812]: I1124 20:09:50.141780 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wlq8"] Nov 24 20:09:51 crc kubenswrapper[4812]: I1124 20:09:51.727798 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7wlq8" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="registry-server" containerID="cri-o://9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da" gracePeriod=2 Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.217211 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.321490 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6j4x\" (UniqueName: \"kubernetes.io/projected/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-kube-api-access-b6j4x\") pod \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.321563 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-utilities\") pod \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.321597 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-catalog-content\") pod \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\" (UID: \"6b925a7f-2b72-4ece-ac2f-5a313d440ed1\") " Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.322759 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-utilities" (OuterVolumeSpecName: "utilities") pod "6b925a7f-2b72-4ece-ac2f-5a313d440ed1" (UID: "6b925a7f-2b72-4ece-ac2f-5a313d440ed1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.327123 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-kube-api-access-b6j4x" (OuterVolumeSpecName: "kube-api-access-b6j4x") pod "6b925a7f-2b72-4ece-ac2f-5a313d440ed1" (UID: "6b925a7f-2b72-4ece-ac2f-5a313d440ed1"). InnerVolumeSpecName "kube-api-access-b6j4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.423615 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6j4x\" (UniqueName: \"kubernetes.io/projected/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-kube-api-access-b6j4x\") on node \"crc\" DevicePath \"\"" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.423656 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.686063 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.686145 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.755748 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerID="9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da" exitCode=0 Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.755844 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7wlq8" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.755850 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wlq8" event={"ID":"6b925a7f-2b72-4ece-ac2f-5a313d440ed1","Type":"ContainerDied","Data":"9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da"} Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.755974 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7wlq8" event={"ID":"6b925a7f-2b72-4ece-ac2f-5a313d440ed1","Type":"ContainerDied","Data":"eee6703a6c5f0ae8caac1cd536f4c502a5258a89fc8450ccc21b477c4d98730f"} Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.756002 4812 scope.go:117] "RemoveContainer" containerID="9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.758363 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.783182 4812 scope.go:117] "RemoveContainer" containerID="de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.804793 4812 scope.go:117] "RemoveContainer" containerID="44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.820073 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.859947 4812 scope.go:117] "RemoveContainer" containerID="9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da" Nov 24 20:09:52 crc kubenswrapper[4812]: E1124 20:09:52.860709 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da\": container with ID starting with 9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da not found: ID does not exist" containerID="9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.860773 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da"} err="failed to get container status \"9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da\": rpc error: code = NotFound desc = could not find container \"9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da\": container with ID starting with 9751d13419561c487eeddccb3c853dd52c407e867fe9c419d968efd9b3f838da not found: ID does not exist" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.860803 4812 scope.go:117] "RemoveContainer" containerID="de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1" Nov 24 20:09:52 crc kubenswrapper[4812]: E1124 20:09:52.861268 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1\": container with ID starting with de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1 not found: ID does not exist" containerID="de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1" Nov 24 20:09:52 crc 
kubenswrapper[4812]: I1124 20:09:52.861375 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1"} err="failed to get container status \"de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1\": rpc error: code = NotFound desc = could not find container \"de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1\": container with ID starting with de7a1dff985e8372e528109c3c6ec0bd4fa88febe749fc1878fd37b5ed8562f1 not found: ID does not exist" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.861514 4812 scope.go:117] "RemoveContainer" containerID="44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d" Nov 24 20:09:52 crc kubenswrapper[4812]: E1124 20:09:52.861851 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d\": container with ID starting with 44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d not found: ID does not exist" containerID="44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.861892 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d"} err="failed to get container status \"44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d\": rpc error: code = NotFound desc = could not find container \"44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d\": container with ID starting with 44810591b1f62ace9095c71dad71e585f93a385753cf918d74f5806ed44a594d not found: ID does not exist" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.885287 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b925a7f-2b72-4ece-ac2f-5a313d440ed1" (UID: "6b925a7f-2b72-4ece-ac2f-5a313d440ed1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:09:52 crc kubenswrapper[4812]: I1124 20:09:52.932602 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b925a7f-2b72-4ece-ac2f-5a313d440ed1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:09:53 crc kubenswrapper[4812]: I1124 20:09:53.082706 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7wlq8"] Nov 24 20:09:53 crc kubenswrapper[4812]: I1124 20:09:53.092791 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7wlq8"] Nov 24 20:09:53 crc kubenswrapper[4812]: I1124 20:09:53.965849 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:09:53 crc kubenswrapper[4812]: E1124 20:09:53.966528 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:09:54 crc kubenswrapper[4812]: I1124 20:09:54.980286 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" path="/var/lib/kubelet/pods/6b925a7f-2b72-4ece-ac2f-5a313d440ed1/volumes" Nov 24 20:09:55 crc kubenswrapper[4812]: I1124 20:09:55.143824 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sf44b"] Nov 24 20:09:55 crc kubenswrapper[4812]: I1124 20:09:55.144143 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sf44b" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="registry-server" containerID="cri-o://f0b08888f193f2c753f7ec0456f0510e9a58f3c42c15041975955ad2931eab8e" gracePeriod=2 Nov 24 20:09:56 crc kubenswrapper[4812]: I1124 20:09:56.803960 4812 generic.go:334] "Generic (PLEG): container finished" podID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerID="f0b08888f193f2c753f7ec0456f0510e9a58f3c42c15041975955ad2931eab8e" exitCode=0 Nov 24 20:09:56 crc kubenswrapper[4812]: I1124 20:09:56.804013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf44b" event={"ID":"df846edb-06bc-4d8e-950e-56f30fdea78f","Type":"ContainerDied","Data":"f0b08888f193f2c753f7ec0456f0510e9a58f3c42c15041975955ad2931eab8e"} Nov 24 20:09:56 crc kubenswrapper[4812]: I1124 20:09:56.849179 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.004146 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-catalog-content\") pod \"df846edb-06bc-4d8e-950e-56f30fdea78f\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.004418 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-utilities\") pod \"df846edb-06bc-4d8e-950e-56f30fdea78f\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.004475 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcslp\" (UniqueName: \"kubernetes.io/projected/df846edb-06bc-4d8e-950e-56f30fdea78f-kube-api-access-zcslp\") pod \"df846edb-06bc-4d8e-950e-56f30fdea78f\" (UID: \"df846edb-06bc-4d8e-950e-56f30fdea78f\") " Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.006240 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-utilities" (OuterVolumeSpecName: "utilities") pod "df846edb-06bc-4d8e-950e-56f30fdea78f" (UID: "df846edb-06bc-4d8e-950e-56f30fdea78f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.013189 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df846edb-06bc-4d8e-950e-56f30fdea78f-kube-api-access-zcslp" (OuterVolumeSpecName: "kube-api-access-zcslp") pod "df846edb-06bc-4d8e-950e-56f30fdea78f" (UID: "df846edb-06bc-4d8e-950e-56f30fdea78f"). InnerVolumeSpecName "kube-api-access-zcslp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.029798 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df846edb-06bc-4d8e-950e-56f30fdea78f" (UID: "df846edb-06bc-4d8e-950e-56f30fdea78f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.106580 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.106618 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcslp\" (UniqueName: \"kubernetes.io/projected/df846edb-06bc-4d8e-950e-56f30fdea78f-kube-api-access-zcslp\") on node \"crc\" DevicePath \"\"" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.106639 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df846edb-06bc-4d8e-950e-56f30fdea78f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.818129 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sf44b" event={"ID":"df846edb-06bc-4d8e-950e-56f30fdea78f","Type":"ContainerDied","Data":"7a5aac95431e3ae9dfc09eeaf3c9496db5d8003024ce514ec1aaa3ebea179c64"} Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.818210 4812 scope.go:117] "RemoveContainer" containerID="f0b08888f193f2c753f7ec0456f0510e9a58f3c42c15041975955ad2931eab8e" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.818241 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sf44b" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.848845 4812 scope.go:117] "RemoveContainer" containerID="1e5dcebd2d55ae3450d091c0ad0538f787e3cd71e1510dab3efb4e40b0691c6b" Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.873945 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sf44b"] Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.882704 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sf44b"] Nov 24 20:09:57 crc kubenswrapper[4812]: I1124 20:09:57.903313 4812 scope.go:117] "RemoveContainer" containerID="1be79251ee2c72663de55e2626bb715da0c7a61c77d4ebca01557bd2be9c9d1f" Nov 24 20:09:58 crc kubenswrapper[4812]: I1124 20:09:58.981133 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" path="/var/lib/kubelet/pods/df846edb-06bc-4d8e-950e-56f30fdea78f/volumes" Nov 24 20:10:08 crc kubenswrapper[4812]: I1124 20:10:08.967144 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:10:08 crc kubenswrapper[4812]: E1124 20:10:08.968118 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:10:23 crc kubenswrapper[4812]: I1124 20:10:23.965363 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:10:23 crc kubenswrapper[4812]: E1124 20:10:23.966103 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:10:36 crc kubenswrapper[4812]: I1124 20:10:36.973030 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:10:36 crc kubenswrapper[4812]: E1124 20:10:36.974268 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:10:49 crc kubenswrapper[4812]: I1124 20:10:49.965864 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:10:49 crc kubenswrapper[4812]: E1124 20:10:49.966880 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:11:04 crc kubenswrapper[4812]: I1124 20:11:04.966053 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:11:04 crc kubenswrapper[4812]: E1124 20:11:04.967081 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:11:15 crc kubenswrapper[4812]: I1124 20:11:15.966179 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:11:15 crc kubenswrapper[4812]: E1124 20:11:15.967324 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:11:26 crc kubenswrapper[4812]: I1124 20:11:26.973497 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:11:26 crc kubenswrapper[4812]: E1124 20:11:26.974481 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:11:41 crc kubenswrapper[4812]: I1124 20:11:41.966665 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:11:41 crc kubenswrapper[4812]: E1124 20:11:41.967809 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:11:53 crc kubenswrapper[4812]: I1124 20:11:53.966327 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:11:53 crc kubenswrapper[4812]: E1124 20:11:53.967286 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.722695 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l69dt"] Nov 24 20:11:56 crc kubenswrapper[4812]: E1124 20:11:56.723446 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="extract-utilities" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723468 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="extract-utilities" Nov 24 20:11:56 crc kubenswrapper[4812]: E1124 20:11:56.723509 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="extract-content" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723521 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="extract-content" Nov 24 20:11:56 crc kubenswrapper[4812]: E1124 20:11:56.723557 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="extract-content" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723572 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="extract-content" Nov 24 20:11:56 crc kubenswrapper[4812]: E1124 20:11:56.723590 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="registry-server" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723601 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="registry-server" Nov 24 20:11:56 crc kubenswrapper[4812]: E1124 20:11:56.723619 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="extract-utilities" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723631 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="extract-utilities" Nov 24 20:11:56 crc kubenswrapper[4812]: E1124 20:11:56.723662 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="registry-server" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723674 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="registry-server" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723935 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b925a7f-2b72-4ece-ac2f-5a313d440ed1" containerName="registry-server" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.723979 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="df846edb-06bc-4d8e-950e-56f30fdea78f" containerName="registry-server" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.725630 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.746999 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l69dt"] Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.884301 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-utilities\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.884587 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-catalog-content\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.884824 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rqbf\" (UniqueName: \"kubernetes.io/projected/ba6d9209-353e-4c01-859b-eccdd319a28f-kube-api-access-9rqbf\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.986585 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-utilities\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.986664 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-catalog-content\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.986829 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rqbf\" (UniqueName: \"kubernetes.io/projected/ba6d9209-353e-4c01-859b-eccdd319a28f-kube-api-access-9rqbf\") pod 
\"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.987509 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-catalog-content\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:56 crc kubenswrapper[4812]: I1124 20:11:56.987622 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-utilities\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:57 crc kubenswrapper[4812]: I1124 20:11:57.027564 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rqbf\" (UniqueName: \"kubernetes.io/projected/ba6d9209-353e-4c01-859b-eccdd319a28f-kube-api-access-9rqbf\") pod \"certified-operators-l69dt\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:57 crc kubenswrapper[4812]: I1124 20:11:57.102342 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:11:57 crc kubenswrapper[4812]: I1124 20:11:57.329409 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l69dt"] Nov 24 20:11:57 crc kubenswrapper[4812]: I1124 20:11:57.919394 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerID="8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48" exitCode=0 Nov 24 20:11:57 crc kubenswrapper[4812]: I1124 20:11:57.919444 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l69dt" event={"ID":"ba6d9209-353e-4c01-859b-eccdd319a28f","Type":"ContainerDied","Data":"8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48"} Nov 24 20:11:57 crc kubenswrapper[4812]: I1124 20:11:57.919472 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l69dt" event={"ID":"ba6d9209-353e-4c01-859b-eccdd319a28f","Type":"ContainerStarted","Data":"63783d2e4e50876e2e223cec539501c67c64f1b44d03e7013237f94d8f1df9bd"} Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.326309 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kmtc2"] Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.329929 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.334483 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmtc2"] Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.443128 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-catalog-content\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.443465 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlqpl\" (UniqueName: \"kubernetes.io/projected/240d37a5-0424-4c3b-bb65-d45d1580e93b-kube-api-access-tlqpl\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.443592 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-utilities\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.544602 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-catalog-content\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.544666 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlqpl\" (UniqueName: \"kubernetes.io/projected/240d37a5-0424-4c3b-bb65-d45d1580e93b-kube-api-access-tlqpl\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.544760 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-utilities\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.545106 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-catalog-content\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.545150 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-utilities\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.562779 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tlqpl\" (UniqueName: \"kubernetes.io/projected/240d37a5-0424-4c3b-bb65-d45d1580e93b-kube-api-access-tlqpl\") pod \"redhat-operators-kmtc2\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.649925 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.896593 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kmtc2"] Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.936529 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmtc2" event={"ID":"240d37a5-0424-4c3b-bb65-d45d1580e93b","Type":"ContainerStarted","Data":"6fdfd177f3baf91accdf5f1508706d3adf84f77eb6186fc6e5e79f204205f876"} Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.938382 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerID="585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc" exitCode=0 Nov 24 20:11:59 crc kubenswrapper[4812]: I1124 20:11:59.938421 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l69dt" event={"ID":"ba6d9209-353e-4c01-859b-eccdd319a28f","Type":"ContainerDied","Data":"585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc"} Nov 24 20:12:00 crc kubenswrapper[4812]: I1124 20:12:00.950452 4812 generic.go:334] "Generic (PLEG): container finished" podID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerID="d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c" exitCode=0 Nov 24 20:12:00 crc kubenswrapper[4812]: I1124 20:12:00.950541 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmtc2" event={"ID":"240d37a5-0424-4c3b-bb65-d45d1580e93b","Type":"ContainerDied","Data":"d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c"} Nov 24 20:12:00 crc kubenswrapper[4812]: I1124 20:12:00.959045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l69dt" event={"ID":"ba6d9209-353e-4c01-859b-eccdd319a28f","Type":"ContainerStarted","Data":"8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1"} Nov 24 20:12:01 crc kubenswrapper[4812]: I1124 20:12:01.015321 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l69dt" podStartSLOduration=2.575333592 podStartE2EDuration="5.015299672s" podCreationTimestamp="2025-11-24 20:11:56 +0000 UTC" firstStartedPulling="2025-11-24 20:11:57.921813617 +0000 UTC m=+3311.710766018" lastFinishedPulling="2025-11-24 20:12:00.361779727 +0000 UTC m=+3314.150732098" observedRunningTime="2025-11-24 20:12:01.011313299 +0000 UTC m=+3314.800265700" watchObservedRunningTime="2025-11-24 20:12:01.015299672 +0000 UTC m=+3314.804252053" Nov 24 20:12:01 crc kubenswrapper[4812]: I1124 20:12:01.969754 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmtc2" event={"ID":"240d37a5-0424-4c3b-bb65-d45d1580e93b","Type":"ContainerStarted","Data":"00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1"} Nov 24 20:12:02 crc kubenswrapper[4812]: I1124 20:12:02.981527 4812 generic.go:334] "Generic (PLEG): container finished" podID="240d37a5-0424-4c3b-bb65-d45d1580e93b" 
containerID="00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1" exitCode=0 Nov 24 20:12:02 crc kubenswrapper[4812]: I1124 20:12:02.981667 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmtc2" event={"ID":"240d37a5-0424-4c3b-bb65-d45d1580e93b","Type":"ContainerDied","Data":"00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1"} Nov 24 20:12:03 crc kubenswrapper[4812]: I1124 20:12:03.996822 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmtc2" event={"ID":"240d37a5-0424-4c3b-bb65-d45d1580e93b","Type":"ContainerStarted","Data":"ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862"} Nov 24 20:12:04 crc kubenswrapper[4812]: I1124 20:12:04.032908 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kmtc2" podStartSLOduration=2.539709375 podStartE2EDuration="5.032889273s" podCreationTimestamp="2025-11-24 20:11:59 +0000 UTC" firstStartedPulling="2025-11-24 20:12:00.952581263 +0000 UTC m=+3314.741533664" lastFinishedPulling="2025-11-24 20:12:03.445761151 +0000 UTC m=+3317.234713562" observedRunningTime="2025-11-24 20:12:04.030199557 +0000 UTC m=+3317.819151968" watchObservedRunningTime="2025-11-24 20:12:04.032889273 +0000 UTC m=+3317.821841654" Nov 24 20:12:06 crc kubenswrapper[4812]: I1124 20:12:06.975467 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:12:06 crc kubenswrapper[4812]: E1124 20:12:06.976484 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:12:07 crc kubenswrapper[4812]: I1124 20:12:07.103060 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:12:07 crc kubenswrapper[4812]: I1124 20:12:07.103679 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:12:07 crc kubenswrapper[4812]: I1124 20:12:07.186212 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:12:08 crc kubenswrapper[4812]: I1124 20:12:08.108838 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:12:08 crc kubenswrapper[4812]: I1124 20:12:08.306824 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l69dt"] Nov 24 20:12:09 crc kubenswrapper[4812]: I1124 20:12:09.650609 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:12:09 crc kubenswrapper[4812]: I1124 20:12:09.651279 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:12:10 crc kubenswrapper[4812]: I1124 20:12:10.055496 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l69dt" 
podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="registry-server" containerID="cri-o://8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1" gracePeriod=2 Nov 24 20:12:10 crc kubenswrapper[4812]: I1124 20:12:10.722442 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kmtc2" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="registry-server" probeResult="failure" output=< Nov 24 20:12:10 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 20:12:10 crc kubenswrapper[4812]: > Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.048177 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.076473 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerID="8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1" exitCode=0 Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.076532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l69dt" event={"ID":"ba6d9209-353e-4c01-859b-eccdd319a28f","Type":"ContainerDied","Data":"8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1"} Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.076573 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l69dt" event={"ID":"ba6d9209-353e-4c01-859b-eccdd319a28f","Type":"ContainerDied","Data":"63783d2e4e50876e2e223cec539501c67c64f1b44d03e7013237f94d8f1df9bd"} Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.076603 4812 scope.go:117] "RemoveContainer" containerID="8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.076807 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l69dt" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.107988 4812 scope.go:117] "RemoveContainer" containerID="585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.136971 4812 scope.go:117] "RemoveContainer" containerID="8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.181173 4812 scope.go:117] "RemoveContainer" containerID="8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1" Nov 24 20:12:11 crc kubenswrapper[4812]: E1124 20:12:11.181759 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1\": container with ID starting with 8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1 not found: ID does not exist" containerID="8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.181806 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1"} err="failed to get container status \"8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1\": rpc error: code = NotFound desc = could not find container \"8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1\": container with ID starting with 8fa594207a3db53516f32d60a976f4ff510b048d1ac0822ba109ebef72fdf7e1 not found: ID does not exist" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.181837 4812 scope.go:117] "RemoveContainer" containerID="585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc" Nov 24 20:12:11 crc kubenswrapper[4812]: E1124 20:12:11.182187 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc\": container with ID starting with 585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc not found: ID does not exist" containerID="585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.182221 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc"} err="failed to get container status \"585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc\": rpc error: code = NotFound desc = could not find container \"585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc\": container with ID starting with 585da5df43b46720490c1ebdfb9b7907a96d307b22619ff21fc7dd662c2d08cc not found: ID does not exist" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.182247 4812 scope.go:117] "RemoveContainer" containerID="8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48" Nov 24 20:12:11 crc kubenswrapper[4812]: E1124 20:12:11.182756 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48\": container with ID starting with 8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48 not found: ID does not exist" containerID="8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48" 
Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.182824 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48"} err="failed to get container status \"8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48\": rpc error: code = NotFound desc = could not find container \"8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48\": container with ID starting with 8f99437a2b13a3e2df658c376c885b98e12732f1d5a9ab6949243cf87c62fe48 not found: ID does not exist" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.235292 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-utilities\") pod \"ba6d9209-353e-4c01-859b-eccdd319a28f\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.235420 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-catalog-content\") pod \"ba6d9209-353e-4c01-859b-eccdd319a28f\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.235550 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rqbf\" (UniqueName: \"kubernetes.io/projected/ba6d9209-353e-4c01-859b-eccdd319a28f-kube-api-access-9rqbf\") pod \"ba6d9209-353e-4c01-859b-eccdd319a28f\" (UID: \"ba6d9209-353e-4c01-859b-eccdd319a28f\") " Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.236292 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-utilities" (OuterVolumeSpecName: "utilities") pod "ba6d9209-353e-4c01-859b-eccdd319a28f" (UID: "ba6d9209-353e-4c01-859b-eccdd319a28f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.245136 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6d9209-353e-4c01-859b-eccdd319a28f-kube-api-access-9rqbf" (OuterVolumeSpecName: "kube-api-access-9rqbf") pod "ba6d9209-353e-4c01-859b-eccdd319a28f" (UID: "ba6d9209-353e-4c01-859b-eccdd319a28f"). InnerVolumeSpecName "kube-api-access-9rqbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.324608 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba6d9209-353e-4c01-859b-eccdd319a28f" (UID: "ba6d9209-353e-4c01-859b-eccdd319a28f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.337311 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.337360 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6d9209-353e-4c01-859b-eccdd319a28f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.337376 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rqbf\" (UniqueName: \"kubernetes.io/projected/ba6d9209-353e-4c01-859b-eccdd319a28f-kube-api-access-9rqbf\") on node \"crc\" DevicePath \"\"" Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.423441 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l69dt"] Nov 24 20:12:11 crc kubenswrapper[4812]: I1124 20:12:11.433260 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l69dt"] Nov 24 20:12:12 crc kubenswrapper[4812]: I1124 20:12:12.984248 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" path="/var/lib/kubelet/pods/ba6d9209-353e-4c01-859b-eccdd319a28f/volumes" Nov 24 20:12:17 crc kubenswrapper[4812]: I1124 20:12:17.966165 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:12:17 crc kubenswrapper[4812]: E1124 20:12:17.967181 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:12:19 crc kubenswrapper[4812]: I1124 20:12:19.722965 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:12:19 crc kubenswrapper[4812]: I1124 20:12:19.804208 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:12:19 crc kubenswrapper[4812]: I1124 20:12:19.983157 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmtc2"] Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.181431 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kmtc2" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="registry-server" containerID="cri-o://ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862" gracePeriod=2 Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.706844 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.755283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-utilities\") pod \"240d37a5-0424-4c3b-bb65-d45d1580e93b\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.755361 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlqpl\" (UniqueName: \"kubernetes.io/projected/240d37a5-0424-4c3b-bb65-d45d1580e93b-kube-api-access-tlqpl\") pod \"240d37a5-0424-4c3b-bb65-d45d1580e93b\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.755429 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-catalog-content\") pod \"240d37a5-0424-4c3b-bb65-d45d1580e93b\" (UID: \"240d37a5-0424-4c3b-bb65-d45d1580e93b\") " Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.757735 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-utilities" (OuterVolumeSpecName: "utilities") pod "240d37a5-0424-4c3b-bb65-d45d1580e93b" (UID: "240d37a5-0424-4c3b-bb65-d45d1580e93b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.763591 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240d37a5-0424-4c3b-bb65-d45d1580e93b-kube-api-access-tlqpl" (OuterVolumeSpecName: "kube-api-access-tlqpl") pod "240d37a5-0424-4c3b-bb65-d45d1580e93b" (UID: "240d37a5-0424-4c3b-bb65-d45d1580e93b"). InnerVolumeSpecName "kube-api-access-tlqpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.848313 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "240d37a5-0424-4c3b-bb65-d45d1580e93b" (UID: "240d37a5-0424-4c3b-bb65-d45d1580e93b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.857422 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.857535 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlqpl\" (UniqueName: \"kubernetes.io/projected/240d37a5-0424-4c3b-bb65-d45d1580e93b-kube-api-access-tlqpl\") on node \"crc\" DevicePath \"\"" Nov 24 20:12:21 crc kubenswrapper[4812]: I1124 20:12:21.857629 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240d37a5-0424-4c3b-bb65-d45d1580e93b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.195406 4812 generic.go:334] "Generic (PLEG): container finished" podID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerID="ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862" exitCode=0 Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.195582 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kmtc2" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.195620 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmtc2" event={"ID":"240d37a5-0424-4c3b-bb65-d45d1580e93b","Type":"ContainerDied","Data":"ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862"} Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.197507 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kmtc2" event={"ID":"240d37a5-0424-4c3b-bb65-d45d1580e93b","Type":"ContainerDied","Data":"6fdfd177f3baf91accdf5f1508706d3adf84f77eb6186fc6e5e79f204205f876"} Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.197559 4812 scope.go:117] "RemoveContainer" containerID="ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.227187 4812 scope.go:117] "RemoveContainer" containerID="00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.251400 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kmtc2"] Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.259514 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kmtc2"] Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.265255 4812 scope.go:117] "RemoveContainer" containerID="d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.294149 4812 scope.go:117] "RemoveContainer" containerID="ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862" Nov 24 20:12:22 crc kubenswrapper[4812]: E1124 20:12:22.294936 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862\": container with ID starting with ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862 not found: ID does not exist" containerID="ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.295026 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862"} err="failed to get container status \"ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862\": rpc error: code = NotFound desc = could not find container \"ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862\": container with ID starting with ec699de181ea6be633c7b982c90b1d40c079bf09066628be15da9ab60d0db862 not found: ID does not exist" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.295080 4812 scope.go:117] "RemoveContainer" containerID="00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1" Nov 24 20:12:22 crc kubenswrapper[4812]: E1124 20:12:22.296031 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1\": container with ID starting with 00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1 not found: ID does not exist" containerID="00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.296085 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1"} err="failed to get container status \"00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1\": rpc error: code = NotFound desc = could not find container \"00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1\": container with ID starting with 00780bc9b8a3842c61581ac3b26d781f946952816210f5370bf00472e4bf27c1 not found: ID does not exist" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.296119 4812 scope.go:117] "RemoveContainer" containerID="d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c" Nov 24 20:12:22 crc kubenswrapper[4812]: E1124 20:12:22.297050 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c\": container with ID starting with d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c not found: ID does not exist" containerID="d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.297087 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c"} err="failed to get container status \"d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c\": rpc error: code = NotFound desc = could not find container \"d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c\": container with ID starting with d3e18191071e3dc1220cef7d3c7f4261e8f222f427481a780e55b56551c4ec7c not found: ID does not exist" Nov 24 20:12:22 crc kubenswrapper[4812]: I1124 20:12:22.985923 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" path="/var/lib/kubelet/pods/240d37a5-0424-4c3b-bb65-d45d1580e93b/volumes" Nov 24 20:12:32 crc kubenswrapper[4812]: I1124 20:12:32.967789 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:12:32 crc kubenswrapper[4812]: E1124 20:12:32.970065 4812 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:12:45 crc kubenswrapper[4812]: I1124 20:12:45.965936 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:12:45 crc kubenswrapper[4812]: E1124 20:12:45.966942 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:13:00 crc kubenswrapper[4812]: I1124 20:13:00.966143 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:13:00 crc kubenswrapper[4812]: E1124 20:13:00.968149 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:13:13 crc kubenswrapper[4812]: I1124 20:13:13.966574 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:13:13 crc kubenswrapper[4812]: E1124 20:13:13.967459 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:13:24 crc kubenswrapper[4812]: I1124 20:13:24.965920 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:13:24 crc kubenswrapper[4812]: E1124 20:13:24.967263 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:13:38 crc kubenswrapper[4812]: I1124 20:13:38.965905 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:13:38 crc kubenswrapper[4812]: E1124 20:13:38.967384 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:13:53 crc kubenswrapper[4812]: I1124 20:13:53.966553 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:13:53 crc kubenswrapper[4812]: E1124 20:13:53.967827 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:14:06 crc kubenswrapper[4812]: I1124 20:14:06.975513 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:14:08 crc kubenswrapper[4812]: I1124 20:14:08.232830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"6787f8a2133918148b0e5e0c7108a772d3f17345a49b8a77e3717faeddb53b50"} Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.211148 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn"] Nov 24 20:15:00 crc kubenswrapper[4812]: E1124 20:15:00.211949 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="extract-content" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.211965 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="extract-content" Nov 24 20:15:00 crc kubenswrapper[4812]: E1124 20:15:00.211976 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="extract-utilities" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.211985 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="extract-utilities" Nov 24 20:15:00 crc kubenswrapper[4812]: E1124 20:15:00.212006 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="registry-server" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.212015 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="registry-server" Nov 24 20:15:00 crc kubenswrapper[4812]: E1124 20:15:00.212024 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="extract-content" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.212031 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="extract-content" Nov 24 20:15:00 crc kubenswrapper[4812]: E1124 20:15:00.212060 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="registry-server" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.212067 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="registry-server" Nov 24 20:15:00 crc kubenswrapper[4812]: E1124 20:15:00.212079 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="extract-utilities" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.212087 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="extract-utilities" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.212251 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="240d37a5-0424-4c3b-bb65-d45d1580e93b" containerName="registry-server" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.212269 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6d9209-353e-4c01-859b-eccdd319a28f" containerName="registry-server" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.212968 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.215953 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.216201 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.241239 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn"] Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.369547 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/636f9036-4397-4aa2-a431-cd8253bf87c3-config-volume\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.369620 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/636f9036-4397-4aa2-a431-cd8253bf87c3-secret-volume\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.369644 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwpfl\" (UniqueName: \"kubernetes.io/projected/636f9036-4397-4aa2-a431-cd8253bf87c3-kube-api-access-pwpfl\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.470877 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/636f9036-4397-4aa2-a431-cd8253bf87c3-config-volume\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.471024 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/636f9036-4397-4aa2-a431-cd8253bf87c3-secret-volume\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.471091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwpfl\" (UniqueName: \"kubernetes.io/projected/636f9036-4397-4aa2-a431-cd8253bf87c3-kube-api-access-pwpfl\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.471820 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/636f9036-4397-4aa2-a431-cd8253bf87c3-config-volume\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.479750 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/636f9036-4397-4aa2-a431-cd8253bf87c3-secret-volume\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.493766 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwpfl\" (UniqueName: \"kubernetes.io/projected/636f9036-4397-4aa2-a431-cd8253bf87c3-kube-api-access-pwpfl\") pod \"collect-profiles-29400255-jzgzn\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.556788 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:00 crc kubenswrapper[4812]: I1124 20:15:00.840543 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn"] Nov 24 20:15:00 crc kubenswrapper[4812]: W1124 20:15:00.844201 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod636f9036_4397_4aa2_a431_cd8253bf87c3.slice/crio-a745552b8d30d7cfa5dd66b54b6267cc1f7984c8546e4892beb180d5bb16f9f7 WatchSource:0}: Error finding container a745552b8d30d7cfa5dd66b54b6267cc1f7984c8546e4892beb180d5bb16f9f7: Status 404 returned error can't find the container with id a745552b8d30d7cfa5dd66b54b6267cc1f7984c8546e4892beb180d5bb16f9f7 Nov 24 20:15:01 crc kubenswrapper[4812]: I1124 20:15:01.734153 4812 generic.go:334] "Generic (PLEG): container finished" podID="636f9036-4397-4aa2-a431-cd8253bf87c3" containerID="27751295ee487f4ab8aecf1d392858b8cde1cfd3a2359d90e6ddfb21dd506cd0" exitCode=0 Nov 24 20:15:01 crc kubenswrapper[4812]: I1124 20:15:01.734235 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" event={"ID":"636f9036-4397-4aa2-a431-cd8253bf87c3","Type":"ContainerDied","Data":"27751295ee487f4ab8aecf1d392858b8cde1cfd3a2359d90e6ddfb21dd506cd0"} Nov 24 20:15:01 crc kubenswrapper[4812]: I1124 20:15:01.734633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" event={"ID":"636f9036-4397-4aa2-a431-cd8253bf87c3","Type":"ContainerStarted","Data":"a745552b8d30d7cfa5dd66b54b6267cc1f7984c8546e4892beb180d5bb16f9f7"} Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.190725 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.210956 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/636f9036-4397-4aa2-a431-cd8253bf87c3-config-volume\") pod \"636f9036-4397-4aa2-a431-cd8253bf87c3\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.211068 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwpfl\" (UniqueName: \"kubernetes.io/projected/636f9036-4397-4aa2-a431-cd8253bf87c3-kube-api-access-pwpfl\") pod \"636f9036-4397-4aa2-a431-cd8253bf87c3\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.211439 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/636f9036-4397-4aa2-a431-cd8253bf87c3-config-volume" (OuterVolumeSpecName: "config-volume") pod "636f9036-4397-4aa2-a431-cd8253bf87c3" (UID: "636f9036-4397-4aa2-a431-cd8253bf87c3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.216890 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636f9036-4397-4aa2-a431-cd8253bf87c3-kube-api-access-pwpfl" (OuterVolumeSpecName: "kube-api-access-pwpfl") pod "636f9036-4397-4aa2-a431-cd8253bf87c3" (UID: "636f9036-4397-4aa2-a431-cd8253bf87c3"). 
InnerVolumeSpecName "kube-api-access-pwpfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.312043 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/636f9036-4397-4aa2-a431-cd8253bf87c3-secret-volume\") pod \"636f9036-4397-4aa2-a431-cd8253bf87c3\" (UID: \"636f9036-4397-4aa2-a431-cd8253bf87c3\") " Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.312402 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwpfl\" (UniqueName: \"kubernetes.io/projected/636f9036-4397-4aa2-a431-cd8253bf87c3-kube-api-access-pwpfl\") on node \"crc\" DevicePath \"\"" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.312432 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/636f9036-4397-4aa2-a431-cd8253bf87c3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.314921 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/636f9036-4397-4aa2-a431-cd8253bf87c3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "636f9036-4397-4aa2-a431-cd8253bf87c3" (UID: "636f9036-4397-4aa2-a431-cd8253bf87c3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.414070 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/636f9036-4397-4aa2-a431-cd8253bf87c3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.755633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" event={"ID":"636f9036-4397-4aa2-a431-cd8253bf87c3","Type":"ContainerDied","Data":"a745552b8d30d7cfa5dd66b54b6267cc1f7984c8546e4892beb180d5bb16f9f7"} Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.755686 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a745552b8d30d7cfa5dd66b54b6267cc1f7984c8546e4892beb180d5bb16f9f7" Nov 24 20:15:03 crc kubenswrapper[4812]: I1124 20:15:03.755733 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn" Nov 24 20:15:04 crc kubenswrapper[4812]: I1124 20:15:04.274778 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8"] Nov 24 20:15:04 crc kubenswrapper[4812]: I1124 20:15:04.281282 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400210-7l2c8"] Nov 24 20:15:04 crc kubenswrapper[4812]: I1124 20:15:04.982838 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd51106d-321e-45b6-8d19-1925dcfccb82" path="/var/lib/kubelet/pods/cd51106d-321e-45b6-8d19-1925dcfccb82/volumes" Nov 24 20:15:52 crc kubenswrapper[4812]: I1124 20:15:52.008864 4812 scope.go:117] "RemoveContainer" containerID="2ab2843cd8de441af746041831a0b7425ae5087f4c6a77bd10673772c5ad544b" Nov 24 20:16:32 crc kubenswrapper[4812]: I1124 20:16:32.998254 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:16:33 crc kubenswrapper[4812]: I1124 20:16:32.999204 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:17:02 crc kubenswrapper[4812]: I1124 20:17:02.998892 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:17:02 crc kubenswrapper[4812]: I1124 20:17:02.999603 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:32.999094 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:33.000077 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:33.000156 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:33.001195 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6787f8a2133918148b0e5e0c7108a772d3f17345a49b8a77e3717faeddb53b50"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:33.001298 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://6787f8a2133918148b0e5e0c7108a772d3f17345a49b8a77e3717faeddb53b50" gracePeriod=600 Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:33.172794 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="6787f8a2133918148b0e5e0c7108a772d3f17345a49b8a77e3717faeddb53b50" exitCode=0 Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:33.172853 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"6787f8a2133918148b0e5e0c7108a772d3f17345a49b8a77e3717faeddb53b50"} Nov 24 20:17:33 crc kubenswrapper[4812]: I1124 20:17:33.172889 4812 scope.go:117] "RemoveContainer" containerID="ce719c75ce45941c3a50395d05bcfe10132d523dfafc9b6ff72f7dc17bde8524" Nov 24 20:17:34 crc kubenswrapper[4812]: I1124 20:17:34.184452 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9"} Nov 24 20:20:02 crc kubenswrapper[4812]: I1124 20:20:02.998015 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:20:02 crc kubenswrapper[4812]: I1124 20:20:02.998752 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.355627 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zn7b4"] Nov 24 20:20:18 crc kubenswrapper[4812]: E1124 20:20:18.358563 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636f9036-4397-4aa2-a431-cd8253bf87c3" containerName="collect-profiles" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.358732 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="636f9036-4397-4aa2-a431-cd8253bf87c3" containerName="collect-profiles" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.359491 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="636f9036-4397-4aa2-a431-cd8253bf87c3" containerName="collect-profiles" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.361828 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.380237 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn7b4"] Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.552886 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-utilities\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.553380 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqsz\" (UniqueName: \"kubernetes.io/projected/ef5df799-6882-4f04-8939-9ffd4b341808-kube-api-access-dkqsz\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.553590 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-catalog-content\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.655469 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkqsz\" (UniqueName: \"kubernetes.io/projected/ef5df799-6882-4f04-8939-9ffd4b341808-kube-api-access-dkqsz\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.655545 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-catalog-content\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.655673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-utilities\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.656687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-utilities\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.656883 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-catalog-content\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.693904 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-dkqsz\" (UniqueName: \"kubernetes.io/projected/ef5df799-6882-4f04-8939-9ffd4b341808-kube-api-access-dkqsz\") pod \"redhat-marketplace-zn7b4\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:18 crc kubenswrapper[4812]: I1124 20:20:18.694455 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:19 crc kubenswrapper[4812]: I1124 20:20:19.199750 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn7b4"] Nov 24 20:20:19 crc kubenswrapper[4812]: I1124 20:20:19.833897 4812 generic.go:334] "Generic (PLEG): container finished" podID="ef5df799-6882-4f04-8939-9ffd4b341808" containerID="d728a083e80bec7c45d6049b3cbae925f7e03cb96e431667e5b1a73b18ef96d4" exitCode=0 Nov 24 20:20:19 crc kubenswrapper[4812]: I1124 20:20:19.833976 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn7b4" event={"ID":"ef5df799-6882-4f04-8939-9ffd4b341808","Type":"ContainerDied","Data":"d728a083e80bec7c45d6049b3cbae925f7e03cb96e431667e5b1a73b18ef96d4"} Nov 24 20:20:19 crc kubenswrapper[4812]: I1124 20:20:19.834274 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn7b4" event={"ID":"ef5df799-6882-4f04-8939-9ffd4b341808","Type":"ContainerStarted","Data":"c1d56c486e1cac2ca114659914bf96ce098de040e95653162133f1a382ba1a4f"} Nov 24 20:20:19 crc kubenswrapper[4812]: I1124 20:20:19.836813 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 20:20:21 crc kubenswrapper[4812]: I1124 20:20:21.855704 4812 generic.go:334] "Generic (PLEG): container finished" podID="ef5df799-6882-4f04-8939-9ffd4b341808" containerID="9b7c16fb52ae90201c107601873da854ecb3bb092c3c267df8421f904556134f" exitCode=0 Nov 24 20:20:21 crc kubenswrapper[4812]: I1124 20:20:21.855801 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn7b4" event={"ID":"ef5df799-6882-4f04-8939-9ffd4b341808","Type":"ContainerDied","Data":"9b7c16fb52ae90201c107601873da854ecb3bb092c3c267df8421f904556134f"} Nov 24 20:20:22 crc kubenswrapper[4812]: I1124 20:20:22.868588 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn7b4" event={"ID":"ef5df799-6882-4f04-8939-9ffd4b341808","Type":"ContainerStarted","Data":"7447eba660cf854d22ccff4863a9fb45594f38fa63a2406a0553250f6ce1fc5c"} Nov 24 20:20:22 crc kubenswrapper[4812]: I1124 20:20:22.899839 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zn7b4" podStartSLOduration=2.254179195 podStartE2EDuration="4.899813915s" podCreationTimestamp="2025-11-24 20:20:18 +0000 UTC" firstStartedPulling="2025-11-24 20:20:19.836507223 +0000 UTC m=+3813.625459604" lastFinishedPulling="2025-11-24 20:20:22.482141913 +0000 UTC m=+3816.271094324" observedRunningTime="2025-11-24 20:20:22.893106936 +0000 UTC m=+3816.682059357" watchObservedRunningTime="2025-11-24 20:20:22.899813915 +0000 UTC m=+3816.688766326" Nov 24 20:20:28 crc kubenswrapper[4812]: I1124 20:20:28.695555 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:28 crc kubenswrapper[4812]: I1124 20:20:28.696181 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:28 crc kubenswrapper[4812]: I1124 20:20:28.763280 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:29 crc kubenswrapper[4812]: I1124 20:20:29.004511 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:29 crc kubenswrapper[4812]: I1124 20:20:29.068068 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn7b4"] Nov 24 20:20:30 crc kubenswrapper[4812]: I1124 20:20:30.941093 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zn7b4" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="registry-server" containerID="cri-o://7447eba660cf854d22ccff4863a9fb45594f38fa63a2406a0553250f6ce1fc5c" gracePeriod=2 Nov 24 20:20:31 crc kubenswrapper[4812]: I1124 20:20:31.948048 4812 generic.go:334] "Generic (PLEG): container finished" podID="ef5df799-6882-4f04-8939-9ffd4b341808" containerID="7447eba660cf854d22ccff4863a9fb45594f38fa63a2406a0553250f6ce1fc5c" exitCode=0 Nov 24 20:20:31 crc kubenswrapper[4812]: I1124 20:20:31.948172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn7b4" event={"ID":"ef5df799-6882-4f04-8939-9ffd4b341808","Type":"ContainerDied","Data":"7447eba660cf854d22ccff4863a9fb45594f38fa63a2406a0553250f6ce1fc5c"} Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.025666 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.121109 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-utilities\") pod \"ef5df799-6882-4f04-8939-9ffd4b341808\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.121292 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-catalog-content\") pod \"ef5df799-6882-4f04-8939-9ffd4b341808\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.121376 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkqsz\" (UniqueName: \"kubernetes.io/projected/ef5df799-6882-4f04-8939-9ffd4b341808-kube-api-access-dkqsz\") pod \"ef5df799-6882-4f04-8939-9ffd4b341808\" (UID: \"ef5df799-6882-4f04-8939-9ffd4b341808\") " Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.124638 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-utilities" (OuterVolumeSpecName: "utilities") pod "ef5df799-6882-4f04-8939-9ffd4b341808" (UID: "ef5df799-6882-4f04-8939-9ffd4b341808"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.129836 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef5df799-6882-4f04-8939-9ffd4b341808-kube-api-access-dkqsz" (OuterVolumeSpecName: "kube-api-access-dkqsz") pod "ef5df799-6882-4f04-8939-9ffd4b341808" (UID: "ef5df799-6882-4f04-8939-9ffd4b341808"). InnerVolumeSpecName "kube-api-access-dkqsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.141262 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef5df799-6882-4f04-8939-9ffd4b341808" (UID: "ef5df799-6882-4f04-8939-9ffd4b341808"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.223877 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.223908 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef5df799-6882-4f04-8939-9ffd4b341808-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.223918 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkqsz\" (UniqueName: \"kubernetes.io/projected/ef5df799-6882-4f04-8939-9ffd4b341808-kube-api-access-dkqsz\") on node \"crc\" DevicePath \"\"" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.960792 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zn7b4" event={"ID":"ef5df799-6882-4f04-8939-9ffd4b341808","Type":"ContainerDied","Data":"c1d56c486e1cac2ca114659914bf96ce098de040e95653162133f1a382ba1a4f"} Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.960844 4812 scope.go:117] "RemoveContainer" containerID="7447eba660cf854d22ccff4863a9fb45594f38fa63a2406a0553250f6ce1fc5c" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.962512 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zn7b4" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.983947 4812 scope.go:117] "RemoveContainer" containerID="9b7c16fb52ae90201c107601873da854ecb3bb092c3c267df8421f904556134f" Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.999442 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:20:32 crc kubenswrapper[4812]: I1124 20:20:32.999542 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:20:33 crc kubenswrapper[4812]: I1124 20:20:33.012695 4812 scope.go:117] "RemoveContainer" containerID="d728a083e80bec7c45d6049b3cbae925f7e03cb96e431667e5b1a73b18ef96d4" Nov 24 20:20:33 crc kubenswrapper[4812]: I1124 20:20:33.017032 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn7b4"] Nov 24 20:20:33 crc kubenswrapper[4812]: I1124 20:20:33.025551 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zn7b4"] Nov 24 20:20:34 crc kubenswrapper[4812]: I1124 20:20:34.979608 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" path="/var/lib/kubelet/pods/ef5df799-6882-4f04-8939-9ffd4b341808/volumes" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.294972 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cr2pq"] Nov 24 20:21:01 crc kubenswrapper[4812]: E1124 20:21:01.296120 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="extract-utilities" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.296144 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="extract-utilities" Nov 24 20:21:01 crc kubenswrapper[4812]: E1124 20:21:01.296163 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="extract-content" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.296176 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="extract-content" Nov 24 20:21:01 crc kubenswrapper[4812]: E1124 20:21:01.296229 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="registry-server" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.296241 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="registry-server" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.296590 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef5df799-6882-4f04-8939-9ffd4b341808" containerName="registry-server" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.298648 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.304212 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cr2pq"] Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.389946 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbl7x\" (UniqueName: \"kubernetes.io/projected/e65e6735-f357-4f2e-92c2-7d9ca26aac62-kube-api-access-qbl7x\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.390185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-catalog-content\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.390536 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-utilities\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.492102 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-utilities\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.492222 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbl7x\" (UniqueName: \"kubernetes.io/projected/e65e6735-f357-4f2e-92c2-7d9ca26aac62-kube-api-access-qbl7x\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.492451 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-catalog-content\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.492801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-utilities\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.493047 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-catalog-content\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.514262 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qbl7x\" (UniqueName: \"kubernetes.io/projected/e65e6735-f357-4f2e-92c2-7d9ca26aac62-kube-api-access-qbl7x\") pod \"community-operators-cr2pq\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.637533 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:01 crc kubenswrapper[4812]: I1124 20:21:01.956475 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cr2pq"] Nov 24 20:21:02 crc kubenswrapper[4812]: I1124 20:21:02.256242 4812 generic.go:334] "Generic (PLEG): container finished" podID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerID="08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7" exitCode=0 Nov 24 20:21:02 crc kubenswrapper[4812]: I1124 20:21:02.256306 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr2pq" event={"ID":"e65e6735-f357-4f2e-92c2-7d9ca26aac62","Type":"ContainerDied","Data":"08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7"} Nov 24 20:21:02 crc kubenswrapper[4812]: I1124 20:21:02.256379 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr2pq" event={"ID":"e65e6735-f357-4f2e-92c2-7d9ca26aac62","Type":"ContainerStarted","Data":"0f30a24d94ad90970216405f64dd97fc4775839a31193c7e86b67de7401b1eb6"} Nov 24 20:21:02 crc kubenswrapper[4812]: I1124 20:21:02.998978 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:02.999277 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:02.999360 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:03.000008 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:03.000072 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" gracePeriod=600 Nov 24 20:21:03 crc kubenswrapper[4812]: E1124 20:21:03.125288 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:03.267568 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr2pq" event={"ID":"e65e6735-f357-4f2e-92c2-7d9ca26aac62","Type":"ContainerStarted","Data":"46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951"} Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:03.273718 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" exitCode=0 Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:03.273765 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9"} Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:03.273801 4812 scope.go:117] "RemoveContainer" containerID="6787f8a2133918148b0e5e0c7108a772d3f17345a49b8a77e3717faeddb53b50" Nov 24 20:21:03 crc kubenswrapper[4812]: I1124 20:21:03.274330 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:21:03 crc kubenswrapper[4812]: E1124 20:21:03.274596 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:21:04 crc kubenswrapper[4812]: I1124 20:21:04.291111 4812 generic.go:334] "Generic (PLEG): container finished" podID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerID="46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951" exitCode=0 Nov 24 20:21:04 crc kubenswrapper[4812]: I1124 20:21:04.291183 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr2pq" event={"ID":"e65e6735-f357-4f2e-92c2-7d9ca26aac62","Type":"ContainerDied","Data":"46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951"} Nov 24 20:21:05 crc kubenswrapper[4812]: I1124 20:21:05.306539 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr2pq" event={"ID":"e65e6735-f357-4f2e-92c2-7d9ca26aac62","Type":"ContainerStarted","Data":"e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635"} Nov 24 20:21:05 crc kubenswrapper[4812]: I1124 20:21:05.338259 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cr2pq" podStartSLOduration=1.828883157 podStartE2EDuration="4.338232349s" podCreationTimestamp="2025-11-24 20:21:01 +0000 UTC" firstStartedPulling="2025-11-24 20:21:02.257806632 +0000 UTC m=+3856.046759033" lastFinishedPulling="2025-11-24 20:21:04.767155814 +0000 UTC m=+3858.556108225" observedRunningTime="2025-11-24 20:21:05.331415676 +0000 UTC m=+3859.120368057" 
watchObservedRunningTime="2025-11-24 20:21:05.338232349 +0000 UTC m=+3859.127184750" Nov 24 20:21:11 crc kubenswrapper[4812]: I1124 20:21:11.638219 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:11 crc kubenswrapper[4812]: I1124 20:21:11.638646 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:11 crc kubenswrapper[4812]: I1124 20:21:11.716797 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:12 crc kubenswrapper[4812]: I1124 20:21:12.445409 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:12 crc kubenswrapper[4812]: I1124 20:21:12.515511 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cr2pq"] Nov 24 20:21:14 crc kubenswrapper[4812]: I1124 20:21:14.386453 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cr2pq" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerName="registry-server" containerID="cri-o://e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635" gracePeriod=2 Nov 24 20:21:14 crc kubenswrapper[4812]: I1124 20:21:14.831897 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:14 crc kubenswrapper[4812]: I1124 20:21:14.967183 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:21:14 crc kubenswrapper[4812]: E1124 20:21:14.973944 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.031281 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-catalog-content\") pod \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.032087 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbl7x\" (UniqueName: \"kubernetes.io/projected/e65e6735-f357-4f2e-92c2-7d9ca26aac62-kube-api-access-qbl7x\") pod \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.032238 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-utilities\") pod \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\" (UID: \"e65e6735-f357-4f2e-92c2-7d9ca26aac62\") " Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.033354 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-utilities" 
(OuterVolumeSpecName: "utilities") pod "e65e6735-f357-4f2e-92c2-7d9ca26aac62" (UID: "e65e6735-f357-4f2e-92c2-7d9ca26aac62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.034670 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.053022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65e6735-f357-4f2e-92c2-7d9ca26aac62-kube-api-access-qbl7x" (OuterVolumeSpecName: "kube-api-access-qbl7x") pod "e65e6735-f357-4f2e-92c2-7d9ca26aac62" (UID: "e65e6735-f357-4f2e-92c2-7d9ca26aac62"). InnerVolumeSpecName "kube-api-access-qbl7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.082046 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e65e6735-f357-4f2e-92c2-7d9ca26aac62" (UID: "e65e6735-f357-4f2e-92c2-7d9ca26aac62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.136224 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbl7x\" (UniqueName: \"kubernetes.io/projected/e65e6735-f357-4f2e-92c2-7d9ca26aac62-kube-api-access-qbl7x\") on node \"crc\" DevicePath \"\"" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.136286 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65e6735-f357-4f2e-92c2-7d9ca26aac62-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.402274 4812 generic.go:334] "Generic (PLEG): container finished" podID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerID="e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635" exitCode=0 Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.402383 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr2pq" event={"ID":"e65e6735-f357-4f2e-92c2-7d9ca26aac62","Type":"ContainerDied","Data":"e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635"} Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.402436 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cr2pq" event={"ID":"e65e6735-f357-4f2e-92c2-7d9ca26aac62","Type":"ContainerDied","Data":"0f30a24d94ad90970216405f64dd97fc4775839a31193c7e86b67de7401b1eb6"} Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.402467 4812 scope.go:117] "RemoveContainer" containerID="e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.402623 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cr2pq" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.460496 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cr2pq"] Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.466978 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cr2pq"] Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.468637 4812 scope.go:117] "RemoveContainer" containerID="46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.506582 4812 scope.go:117] "RemoveContainer" containerID="08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.528325 4812 scope.go:117] "RemoveContainer" containerID="e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635" Nov 24 20:21:15 crc kubenswrapper[4812]: E1124 20:21:15.529064 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635\": container with ID starting with e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635 not found: ID does not exist" containerID="e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.529126 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635"} err="failed to get container status \"e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635\": rpc error: code = NotFound desc = could not find container \"e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635\": container with ID starting with e718cfb142605e26dae8a4e81a52cc45ca8375d68ab9c0b759b4784a95643635 not found: ID does not exist" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.529154 4812 scope.go:117] "RemoveContainer" containerID="46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951" Nov 24 20:21:15 crc kubenswrapper[4812]: E1124 20:21:15.529689 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951\": container with ID starting with 46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951 not found: ID does not exist" containerID="46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.529717 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951"} err="failed to get container status \"46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951\": rpc error: code = NotFound desc = could not find container \"46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951\": container with ID starting with 46a5ae6f9ae3ddacb583f01e9ed5ff6aa9e92d11bb72ec65346d1e5a1a42a951 not found: ID does not exist" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.529735 4812 scope.go:117] "RemoveContainer" containerID="08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7" Nov 24 20:21:15 crc kubenswrapper[4812]: E1124 20:21:15.530129 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7\": container with ID starting with 08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7 not found: ID does not exist" containerID="08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7" Nov 24 20:21:15 crc kubenswrapper[4812]: I1124 20:21:15.530179 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7"} err="failed to get container status \"08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7\": rpc error: code = NotFound desc = could not find container \"08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7\": container with ID starting with 08ab639d2defd7fb8e037243ea1e8684db2281ca9ea96608b9d10c0901fad7d7 not found: ID does not exist" Nov 24 20:21:16 crc kubenswrapper[4812]: I1124 20:21:16.982465 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" path="/var/lib/kubelet/pods/e65e6735-f357-4f2e-92c2-7d9ca26aac62/volumes" Nov 24 20:21:28 crc kubenswrapper[4812]: I1124 20:21:28.965809 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:21:28 crc kubenswrapper[4812]: E1124 20:21:28.968083 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:21:41 crc kubenswrapper[4812]: I1124 20:21:41.966271 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:21:41 crc kubenswrapper[4812]: E1124 20:21:41.967538 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:21:54 crc kubenswrapper[4812]: I1124 20:21:54.966556 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:21:54 crc kubenswrapper[4812]: E1124 20:21:54.967695 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.330062 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rvt8f"] Nov 24 20:22:02 crc kubenswrapper[4812]: E1124 20:22:02.330743 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" 
containerName="extract-utilities" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.330754 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerName="extract-utilities" Nov 24 20:22:02 crc kubenswrapper[4812]: E1124 20:22:02.330769 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerName="registry-server" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.330775 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerName="registry-server" Nov 24 20:22:02 crc kubenswrapper[4812]: E1124 20:22:02.330785 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerName="extract-content" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.330791 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerName="extract-content" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.330943 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65e6735-f357-4f2e-92c2-7d9ca26aac62" containerName="registry-server" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.331865 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.350910 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvt8f"] Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.369904 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txzt\" (UniqueName: \"kubernetes.io/projected/00e9d0b7-e26a-493c-a555-e03a8faaa937-kube-api-access-8txzt\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.369994 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00e9d0b7-e26a-493c-a555-e03a8faaa937-utilities\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.370105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00e9d0b7-e26a-493c-a555-e03a8faaa937-catalog-content\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.471138 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00e9d0b7-e26a-493c-a555-e03a8faaa937-utilities\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.471198 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00e9d0b7-e26a-493c-a555-e03a8faaa937-catalog-content\") pod \"certified-operators-rvt8f\" (UID: 
\"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.471266 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txzt\" (UniqueName: \"kubernetes.io/projected/00e9d0b7-e26a-493c-a555-e03a8faaa937-kube-api-access-8txzt\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.471901 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00e9d0b7-e26a-493c-a555-e03a8faaa937-utilities\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.471951 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00e9d0b7-e26a-493c-a555-e03a8faaa937-catalog-content\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.506056 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txzt\" (UniqueName: \"kubernetes.io/projected/00e9d0b7-e26a-493c-a555-e03a8faaa937-kube-api-access-8txzt\") pod \"certified-operators-rvt8f\" (UID: \"00e9d0b7-e26a-493c-a555-e03a8faaa937\") " pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:02 crc kubenswrapper[4812]: I1124 20:22:02.646451 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:03 crc kubenswrapper[4812]: I1124 20:22:03.119982 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvt8f"] Nov 24 20:22:03 crc kubenswrapper[4812]: I1124 20:22:03.895831 4812 generic.go:334] "Generic (PLEG): container finished" podID="00e9d0b7-e26a-493c-a555-e03a8faaa937" containerID="97a9db99936ef069e44f5494db8022b230b027efce9263f5fe8bf0c50d8903ef" exitCode=0 Nov 24 20:22:03 crc kubenswrapper[4812]: I1124 20:22:03.895920 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvt8f" event={"ID":"00e9d0b7-e26a-493c-a555-e03a8faaa937","Type":"ContainerDied","Data":"97a9db99936ef069e44f5494db8022b230b027efce9263f5fe8bf0c50d8903ef"} Nov 24 20:22:03 crc kubenswrapper[4812]: I1124 20:22:03.896144 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvt8f" event={"ID":"00e9d0b7-e26a-493c-a555-e03a8faaa937","Type":"ContainerStarted","Data":"d20241851543474ba7cda72406f38ebaa1e6fcbb24e2c4e80530fb1223050b11"} Nov 24 20:22:09 crc kubenswrapper[4812]: I1124 20:22:09.966067 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:22:09 crc kubenswrapper[4812]: E1124 20:22:09.966807 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:22:12 crc kubenswrapper[4812]: I1124 20:22:12.988756 4812 generic.go:334] "Generic (PLEG): container finished" podID="00e9d0b7-e26a-493c-a555-e03a8faaa937" containerID="4526605dec22236a6bf7cae018aed07c3e519b3fb165a89f09762b2c86653e41" exitCode=0 Nov 24 20:22:12 crc kubenswrapper[4812]: I1124 20:22:12.988933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvt8f" event={"ID":"00e9d0b7-e26a-493c-a555-e03a8faaa937","Type":"ContainerDied","Data":"4526605dec22236a6bf7cae018aed07c3e519b3fb165a89f09762b2c86653e41"} Nov 24 20:22:20 crc kubenswrapper[4812]: I1124 20:22:20.966645 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:22:20 crc kubenswrapper[4812]: E1124 20:22:20.967942 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:22:22 crc kubenswrapper[4812]: I1124 20:22:22.107758 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rvt8f" event={"ID":"00e9d0b7-e26a-493c-a555-e03a8faaa937","Type":"ContainerStarted","Data":"616aae155a9bb609bcb2e4761df91605ec13968751ce0c079b3c4746be7ce156"} Nov 24 20:22:22 crc kubenswrapper[4812]: I1124 20:22:22.145522 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-rvt8f" podStartSLOduration=3.324448607 podStartE2EDuration="20.145413933s" podCreationTimestamp="2025-11-24 20:22:02 +0000 UTC" firstStartedPulling="2025-11-24 20:22:03.899332366 +0000 UTC m=+3917.688284767" lastFinishedPulling="2025-11-24 20:22:20.720297672 +0000 UTC m=+3934.509250093" observedRunningTime="2025-11-24 20:22:22.139471694 +0000 UTC m=+3935.928424085" watchObservedRunningTime="2025-11-24 20:22:22.145413933 +0000 UTC m=+3935.934366314" Nov 24 20:22:22 crc kubenswrapper[4812]: I1124 20:22:22.648074 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:22 crc kubenswrapper[4812]: I1124 20:22:22.648592 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:23 crc kubenswrapper[4812]: I1124 20:22:23.726362 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rvt8f" podUID="00e9d0b7-e26a-493c-a555-e03a8faaa937" containerName="registry-server" probeResult="failure" output=< Nov 24 20:22:23 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 20:22:23 crc kubenswrapper[4812]: > Nov 24 20:22:25 crc kubenswrapper[4812]: I1124 20:22:25.865973 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9xmp9"] Nov 24 20:22:25 crc kubenswrapper[4812]: I1124 20:22:25.869675 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:25 crc kubenswrapper[4812]: I1124 20:22:25.886637 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xmp9"] Nov 24 20:22:25 crc kubenswrapper[4812]: I1124 20:22:25.967323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-utilities\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:25 crc kubenswrapper[4812]: I1124 20:22:25.967435 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzbd\" (UniqueName: \"kubernetes.io/projected/51b7c3c0-9a02-41b8-ac09-c4b511623255-kube-api-access-qkzbd\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:25 crc kubenswrapper[4812]: I1124 20:22:25.967494 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-catalog-content\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.069317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-utilities\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.069501 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qkzbd\" (UniqueName: \"kubernetes.io/projected/51b7c3c0-9a02-41b8-ac09-c4b511623255-kube-api-access-qkzbd\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.069622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-catalog-content\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.069896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-utilities\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.070066 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-catalog-content\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.090232 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkzbd\" (UniqueName: \"kubernetes.io/projected/51b7c3c0-9a02-41b8-ac09-c4b511623255-kube-api-access-qkzbd\") pod \"redhat-operators-9xmp9\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.251061 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:26 crc kubenswrapper[4812]: I1124 20:22:26.462812 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9xmp9"] Nov 24 20:22:27 crc kubenswrapper[4812]: I1124 20:22:27.155047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xmp9" event={"ID":"51b7c3c0-9a02-41b8-ac09-c4b511623255","Type":"ContainerStarted","Data":"05818d3bfe2554309a7a4bc42a60296995d3517dbabfcc8732f37ae51cf89cb5"} Nov 24 20:22:29 crc kubenswrapper[4812]: I1124 20:22:29.176149 4812 generic.go:334] "Generic (PLEG): container finished" podID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerID="e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9" exitCode=0 Nov 24 20:22:29 crc kubenswrapper[4812]: I1124 20:22:29.176463 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xmp9" event={"ID":"51b7c3c0-9a02-41b8-ac09-c4b511623255","Type":"ContainerDied","Data":"e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9"} Nov 24 20:22:31 crc kubenswrapper[4812]: I1124 20:22:31.966154 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:22:31 crc kubenswrapper[4812]: E1124 20:22:31.966872 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:22:32 crc kubenswrapper[4812]: I1124 20:22:32.224102 4812 generic.go:334] "Generic (PLEG): container finished" podID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerID="d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060" exitCode=0 Nov 24 20:22:32 crc kubenswrapper[4812]: I1124 20:22:32.224181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xmp9" event={"ID":"51b7c3c0-9a02-41b8-ac09-c4b511623255","Type":"ContainerDied","Data":"d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060"} Nov 24 20:22:32 crc kubenswrapper[4812]: I1124 20:22:32.715844 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:32 crc kubenswrapper[4812]: I1124 20:22:32.797785 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rvt8f" Nov 24 20:22:33 crc kubenswrapper[4812]: I1124 20:22:33.309732 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rvt8f"] Nov 24 20:22:33 crc kubenswrapper[4812]: I1124 20:22:33.470492 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zz5v"] Nov 24 20:22:33 crc kubenswrapper[4812]: I1124 20:22:33.470748 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7zz5v" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="registry-server" containerID="cri-o://5144499394f8e032427dc8c346888fa30685ee7e43fe7a70cf9d6e442b6071f3" gracePeriod=2 Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.241877 4812 
generic.go:334] "Generic (PLEG): container finished" podID="8114203a-6cba-4533-b7d5-7379db397421" containerID="5144499394f8e032427dc8c346888fa30685ee7e43fe7a70cf9d6e442b6071f3" exitCode=0 Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.241938 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zz5v" event={"ID":"8114203a-6cba-4533-b7d5-7379db397421","Type":"ContainerDied","Data":"5144499394f8e032427dc8c346888fa30685ee7e43fe7a70cf9d6e442b6071f3"} Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.246862 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xmp9" event={"ID":"51b7c3c0-9a02-41b8-ac09-c4b511623255","Type":"ContainerStarted","Data":"c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95"} Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.280292 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9xmp9" podStartSLOduration=5.221374596 podStartE2EDuration="9.28026286s" podCreationTimestamp="2025-11-24 20:22:25 +0000 UTC" firstStartedPulling="2025-11-24 20:22:29.178141516 +0000 UTC m=+3942.967093897" lastFinishedPulling="2025-11-24 20:22:33.23702975 +0000 UTC m=+3947.025982161" observedRunningTime="2025-11-24 20:22:34.271719249 +0000 UTC m=+3948.060671660" watchObservedRunningTime="2025-11-24 20:22:34.28026286 +0000 UTC m=+3948.069215251" Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.392411 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.512162 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4bc7\" (UniqueName: \"kubernetes.io/projected/8114203a-6cba-4533-b7d5-7379db397421-kube-api-access-v4bc7\") pod \"8114203a-6cba-4533-b7d5-7379db397421\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.512400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-catalog-content\") pod \"8114203a-6cba-4533-b7d5-7379db397421\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.512737 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-utilities\") pod \"8114203a-6cba-4533-b7d5-7379db397421\" (UID: \"8114203a-6cba-4533-b7d5-7379db397421\") " Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.513967 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-utilities" (OuterVolumeSpecName: "utilities") pod "8114203a-6cba-4533-b7d5-7379db397421" (UID: "8114203a-6cba-4533-b7d5-7379db397421"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.521589 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8114203a-6cba-4533-b7d5-7379db397421-kube-api-access-v4bc7" (OuterVolumeSpecName: "kube-api-access-v4bc7") pod "8114203a-6cba-4533-b7d5-7379db397421" (UID: "8114203a-6cba-4533-b7d5-7379db397421"). 
InnerVolumeSpecName "kube-api-access-v4bc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.587137 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8114203a-6cba-4533-b7d5-7379db397421" (UID: "8114203a-6cba-4533-b7d5-7379db397421"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.615154 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.615197 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4bc7\" (UniqueName: \"kubernetes.io/projected/8114203a-6cba-4533-b7d5-7379db397421-kube-api-access-v4bc7\") on node \"crc\" DevicePath \"\"" Nov 24 20:22:34 crc kubenswrapper[4812]: I1124 20:22:34.615212 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8114203a-6cba-4533-b7d5-7379db397421-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:22:35 crc kubenswrapper[4812]: I1124 20:22:35.254753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7zz5v" event={"ID":"8114203a-6cba-4533-b7d5-7379db397421","Type":"ContainerDied","Data":"42e85340d25deec1db994ef1498c74c18f1b809d2ae846c9242ba2ba393580a7"} Nov 24 20:22:35 crc kubenswrapper[4812]: I1124 20:22:35.254835 4812 scope.go:117] "RemoveContainer" containerID="5144499394f8e032427dc8c346888fa30685ee7e43fe7a70cf9d6e442b6071f3" Nov 24 20:22:35 crc kubenswrapper[4812]: I1124 20:22:35.254852 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7zz5v" Nov 24 20:22:35 crc kubenswrapper[4812]: I1124 20:22:35.281995 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7zz5v"] Nov 24 20:22:35 crc kubenswrapper[4812]: I1124 20:22:35.286532 4812 scope.go:117] "RemoveContainer" containerID="5126243c6243f164f9b0e13e295778a92d1413c5e2c77bfa4c33ba5c7d3809de" Nov 24 20:22:35 crc kubenswrapper[4812]: I1124 20:22:35.293097 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7zz5v"] Nov 24 20:22:35 crc kubenswrapper[4812]: I1124 20:22:35.335035 4812 scope.go:117] "RemoveContainer" containerID="e6688398ac818e298dcea0f0fa2556f3a7e0230aced1e5471a93e24f15899b82" Nov 24 20:22:36 crc kubenswrapper[4812]: I1124 20:22:36.253460 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:36 crc kubenswrapper[4812]: I1124 20:22:36.253525 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:36 crc kubenswrapper[4812]: I1124 20:22:36.974832 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8114203a-6cba-4533-b7d5-7379db397421" path="/var/lib/kubelet/pods/8114203a-6cba-4533-b7d5-7379db397421/volumes" Nov 24 20:22:37 crc kubenswrapper[4812]: I1124 20:22:37.305220 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9xmp9" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="registry-server" probeResult="failure" output=< Nov 24 20:22:37 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 20:22:37 crc kubenswrapper[4812]: > Nov 24 20:22:45 crc kubenswrapper[4812]: I1124 20:22:45.966064 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:22:45 crc kubenswrapper[4812]: E1124 20:22:45.966911 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:22:46 crc kubenswrapper[4812]: I1124 20:22:46.343603 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:46 crc kubenswrapper[4812]: I1124 20:22:46.414592 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:46 crc kubenswrapper[4812]: I1124 20:22:46.585551 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xmp9"] Nov 24 20:22:47 crc kubenswrapper[4812]: I1124 20:22:47.363582 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9xmp9" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="registry-server" containerID="cri-o://c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95" gracePeriod=2 Nov 24 20:22:47 crc kubenswrapper[4812]: I1124 20:22:47.816658 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:47 crc kubenswrapper[4812]: I1124 20:22:47.926329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkzbd\" (UniqueName: \"kubernetes.io/projected/51b7c3c0-9a02-41b8-ac09-c4b511623255-kube-api-access-qkzbd\") pod \"51b7c3c0-9a02-41b8-ac09-c4b511623255\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " Nov 24 20:22:47 crc kubenswrapper[4812]: I1124 20:22:47.926675 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-utilities\") pod \"51b7c3c0-9a02-41b8-ac09-c4b511623255\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " Nov 24 20:22:47 crc kubenswrapper[4812]: I1124 20:22:47.926740 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-catalog-content\") pod \"51b7c3c0-9a02-41b8-ac09-c4b511623255\" (UID: \"51b7c3c0-9a02-41b8-ac09-c4b511623255\") " Nov 24 20:22:47 crc kubenswrapper[4812]: I1124 20:22:47.932350 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b7c3c0-9a02-41b8-ac09-c4b511623255-kube-api-access-qkzbd" (OuterVolumeSpecName: "kube-api-access-qkzbd") pod "51b7c3c0-9a02-41b8-ac09-c4b511623255" (UID: "51b7c3c0-9a02-41b8-ac09-c4b511623255"). InnerVolumeSpecName "kube-api-access-qkzbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:22:47 crc kubenswrapper[4812]: I1124 20:22:47.933904 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-utilities" (OuterVolumeSpecName: "utilities") pod "51b7c3c0-9a02-41b8-ac09-c4b511623255" (UID: "51b7c3c0-9a02-41b8-ac09-c4b511623255"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.029003 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkzbd\" (UniqueName: \"kubernetes.io/projected/51b7c3c0-9a02-41b8-ac09-c4b511623255-kube-api-access-qkzbd\") on node \"crc\" DevicePath \"\"" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.029049 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.064004 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51b7c3c0-9a02-41b8-ac09-c4b511623255" (UID: "51b7c3c0-9a02-41b8-ac09-c4b511623255"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.129666 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51b7c3c0-9a02-41b8-ac09-c4b511623255-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.381674 4812 generic.go:334] "Generic (PLEG): container finished" podID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerID="c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95" exitCode=0 Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.381736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xmp9" event={"ID":"51b7c3c0-9a02-41b8-ac09-c4b511623255","Type":"ContainerDied","Data":"c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95"} Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.381797 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9xmp9" event={"ID":"51b7c3c0-9a02-41b8-ac09-c4b511623255","Type":"ContainerDied","Data":"05818d3bfe2554309a7a4bc42a60296995d3517dbabfcc8732f37ae51cf89cb5"} Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.381757 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9xmp9" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.381826 4812 scope.go:117] "RemoveContainer" containerID="c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.424266 4812 scope.go:117] "RemoveContainer" containerID="d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.430266 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9xmp9"] Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.449801 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9xmp9"] Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.466248 4812 scope.go:117] "RemoveContainer" containerID="e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.486068 4812 scope.go:117] "RemoveContainer" containerID="c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95" Nov 24 20:22:48 crc kubenswrapper[4812]: E1124 20:22:48.486818 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95\": container with ID starting with c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95 not found: ID does not exist" containerID="c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.486865 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95"} err="failed to get container status \"c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95\": rpc error: code = NotFound desc = could not find container \"c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95\": container with ID starting with c169b2aab1297bbdce47089e7abf26cc5bb42fe6642659333dfc86671fdf5e95 not found: ID does not exist" Nov 24 20:22:48 crc 
kubenswrapper[4812]: I1124 20:22:48.486892 4812 scope.go:117] "RemoveContainer" containerID="d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060" Nov 24 20:22:48 crc kubenswrapper[4812]: E1124 20:22:48.487312 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060\": container with ID starting with d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060 not found: ID does not exist" containerID="d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.487430 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060"} err="failed to get container status \"d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060\": rpc error: code = NotFound desc = could not find container \"d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060\": container with ID starting with d17e5e960a510172f2d7f1da5a3e61d435f8b21ae8e2b7d3a4e25284e5536060 not found: ID does not exist" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.487489 4812 scope.go:117] "RemoveContainer" containerID="e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9" Nov 24 20:22:48 crc kubenswrapper[4812]: E1124 20:22:48.487891 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9\": container with ID starting with e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9 not found: ID does not exist" containerID="e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.487920 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9"} err="failed to get container status \"e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9\": rpc error: code = NotFound desc = could not find container \"e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9\": container with ID starting with e19ee97884ba9ab40e9ffa7f413ac608f9bdbbd835d79c01886c976482cf78c9 not found: ID does not exist" Nov 24 20:22:48 crc kubenswrapper[4812]: I1124 20:22:48.983373 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" path="/var/lib/kubelet/pods/51b7c3c0-9a02-41b8-ac09-c4b511623255/volumes" Nov 24 20:22:57 crc kubenswrapper[4812]: I1124 20:22:57.966398 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:22:57 crc kubenswrapper[4812]: E1124 20:22:57.967653 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:23:12 crc kubenswrapper[4812]: I1124 20:23:12.965535 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" 
Nov 24 20:23:12 crc kubenswrapper[4812]: E1124 20:23:12.966516 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:23:27 crc kubenswrapper[4812]: I1124 20:23:27.966913 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:23:27 crc kubenswrapper[4812]: E1124 20:23:27.968510 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:23:40 crc kubenswrapper[4812]: I1124 20:23:40.965851 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:23:40 crc kubenswrapper[4812]: E1124 20:23:40.966844 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:23:51 crc kubenswrapper[4812]: I1124 20:23:51.965835 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:23:51 crc kubenswrapper[4812]: E1124 20:23:51.967176 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:24:06 crc kubenswrapper[4812]: I1124 20:24:06.973210 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:24:06 crc kubenswrapper[4812]: E1124 20:24:06.974416 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:24:21 crc kubenswrapper[4812]: I1124 20:24:21.965796 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:24:21 crc kubenswrapper[4812]: E1124 20:24:21.967021 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:24:32 crc kubenswrapper[4812]: I1124 20:24:32.966460 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:24:32 crc kubenswrapper[4812]: E1124 20:24:32.967721 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:24:44 crc kubenswrapper[4812]: I1124 20:24:44.965861 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:24:44 crc kubenswrapper[4812]: E1124 20:24:44.966789 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:24:57 crc kubenswrapper[4812]: I1124 20:24:57.966020 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:24:57 crc kubenswrapper[4812]: E1124 20:24:57.970528 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:25:09 crc kubenswrapper[4812]: I1124 20:25:09.966109 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:25:09 crc kubenswrapper[4812]: E1124 20:25:09.967281 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:25:22 crc kubenswrapper[4812]: I1124 20:25:22.966697 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:25:22 crc kubenswrapper[4812]: E1124 20:25:22.968215 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:25:35 crc kubenswrapper[4812]: I1124 20:25:35.966396 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:25:35 crc kubenswrapper[4812]: E1124 20:25:35.967491 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:25:48 crc kubenswrapper[4812]: I1124 20:25:48.965643 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:25:48 crc kubenswrapper[4812]: E1124 20:25:48.968545 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:25:59 crc kubenswrapper[4812]: I1124 20:25:59.966476 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:25:59 crc kubenswrapper[4812]: E1124 20:25:59.967186 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:26:14 crc kubenswrapper[4812]: I1124 20:26:14.966300 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:26:16 crc kubenswrapper[4812]: I1124 20:26:16.362230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"36ad40c48562b05eb9b464d40849bcfd97cee08a4e1f753bbbb2f77cd7452f53"} Nov 24 20:28:32 crc kubenswrapper[4812]: I1124 20:28:32.998366 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:28:33 crc kubenswrapper[4812]: I1124 20:28:32.999327 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:29:02 crc kubenswrapper[4812]: I1124 20:29:02.998505 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:29:02 crc kubenswrapper[4812]: I1124 20:29:02.999265 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:29:32 crc kubenswrapper[4812]: I1124 20:29:32.998572 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:29:33 crc kubenswrapper[4812]: I1124 20:29:33.000157 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:29:33 crc kubenswrapper[4812]: I1124 20:29:33.000318 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:29:33 crc kubenswrapper[4812]: I1124 20:29:33.001134 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36ad40c48562b05eb9b464d40849bcfd97cee08a4e1f753bbbb2f77cd7452f53"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:29:33 crc kubenswrapper[4812]: I1124 20:29:33.001338 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://36ad40c48562b05eb9b464d40849bcfd97cee08a4e1f753bbbb2f77cd7452f53" gracePeriod=600 Nov 24 20:29:33 crc kubenswrapper[4812]: I1124 20:29:33.216560 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="36ad40c48562b05eb9b464d40849bcfd97cee08a4e1f753bbbb2f77cd7452f53" exitCode=0 Nov 24 20:29:33 crc kubenswrapper[4812]: I1124 20:29:33.216623 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"36ad40c48562b05eb9b464d40849bcfd97cee08a4e1f753bbbb2f77cd7452f53"} Nov 24 20:29:33 crc kubenswrapper[4812]: I1124 20:29:33.216678 4812 scope.go:117] "RemoveContainer" containerID="922826250ac046a4c7f1974bbb38d428ce78a7fd3f163eb3d6a15953945d2cf9" Nov 24 20:29:34 crc kubenswrapper[4812]: I1124 20:29:34.256017 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f"} Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 
20:30:00.180234 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm"] Nov 24 20:30:00 crc kubenswrapper[4812]: E1124 20:30:00.181075 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="extract-content" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181091 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="extract-content" Nov 24 20:30:00 crc kubenswrapper[4812]: E1124 20:30:00.181113 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="extract-content" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181121 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="extract-content" Nov 24 20:30:00 crc kubenswrapper[4812]: E1124 20:30:00.181139 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="registry-server" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181147 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="registry-server" Nov 24 20:30:00 crc kubenswrapper[4812]: E1124 20:30:00.181163 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="registry-server" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181170 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="registry-server" Nov 24 20:30:00 crc kubenswrapper[4812]: E1124 20:30:00.181189 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="extract-utilities" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181199 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="extract-utilities" Nov 24 20:30:00 crc kubenswrapper[4812]: E1124 20:30:00.181219 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="extract-utilities" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181226 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="extract-utilities" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181452 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8114203a-6cba-4533-b7d5-7379db397421" containerName="registry-server" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.181476 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b7c3c0-9a02-41b8-ac09-c4b511623255" containerName="registry-server" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.182017 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.185276 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.187505 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.196033 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm"] Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.281553 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cswh9\" (UniqueName: \"kubernetes.io/projected/76a68241-269a-450a-9b60-90ff42711616-kube-api-access-cswh9\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.281909 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a68241-269a-450a-9b60-90ff42711616-secret-volume\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.282030 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a68241-269a-450a-9b60-90ff42711616-config-volume\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.383132 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a68241-269a-450a-9b60-90ff42711616-secret-volume\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.383184 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a68241-269a-450a-9b60-90ff42711616-config-volume\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.383225 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cswh9\" (UniqueName: \"kubernetes.io/projected/76a68241-269a-450a-9b60-90ff42711616-kube-api-access-cswh9\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.384370 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a68241-269a-450a-9b60-90ff42711616-config-volume\") pod 
\"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.390944 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a68241-269a-450a-9b60-90ff42711616-secret-volume\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.405159 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cswh9\" (UniqueName: \"kubernetes.io/projected/76a68241-269a-450a-9b60-90ff42711616-kube-api-access-cswh9\") pod \"collect-profiles-29400270-qz6hm\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:00 crc kubenswrapper[4812]: I1124 20:30:00.512759 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" Nov 24 20:30:01 crc kubenswrapper[4812]: I1124 20:30:01.004269 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm"] Nov 24 20:30:01 crc kubenswrapper[4812]: I1124 20:30:01.574541 4812 generic.go:334] "Generic (PLEG): container finished" podID="76a68241-269a-450a-9b60-90ff42711616" containerID="5133d57387cea94020b839a33999afaf60ebc70a772f179235f80b89559d78c1" exitCode=0 Nov 24 20:30:01 crc kubenswrapper[4812]: I1124 20:30:01.574757 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" event={"ID":"76a68241-269a-450a-9b60-90ff42711616","Type":"ContainerDied","Data":"5133d57387cea94020b839a33999afaf60ebc70a772f179235f80b89559d78c1"} Nov 24 20:30:01 crc kubenswrapper[4812]: I1124 20:30:01.574958 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" event={"ID":"76a68241-269a-450a-9b60-90ff42711616","Type":"ContainerStarted","Data":"adbe7293247c924e6cbe8fbbe84ca11c74db575b8151440c672dcb930dd3b9fa"} Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.015918 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.015918 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm"
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.148399 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a68241-269a-450a-9b60-90ff42711616-secret-volume\") pod \"76a68241-269a-450a-9b60-90ff42711616\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") "
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.148545 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cswh9\" (UniqueName: \"kubernetes.io/projected/76a68241-269a-450a-9b60-90ff42711616-kube-api-access-cswh9\") pod \"76a68241-269a-450a-9b60-90ff42711616\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") "
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.148630 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a68241-269a-450a-9b60-90ff42711616-config-volume\") pod \"76a68241-269a-450a-9b60-90ff42711616\" (UID: \"76a68241-269a-450a-9b60-90ff42711616\") "
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.149299 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76a68241-269a-450a-9b60-90ff42711616-config-volume" (OuterVolumeSpecName: "config-volume") pod "76a68241-269a-450a-9b60-90ff42711616" (UID: "76a68241-269a-450a-9b60-90ff42711616"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.154611 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76a68241-269a-450a-9b60-90ff42711616-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76a68241-269a-450a-9b60-90ff42711616" (UID: "76a68241-269a-450a-9b60-90ff42711616"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.155999 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76a68241-269a-450a-9b60-90ff42711616-kube-api-access-cswh9" (OuterVolumeSpecName: "kube-api-access-cswh9") pod "76a68241-269a-450a-9b60-90ff42711616" (UID: "76a68241-269a-450a-9b60-90ff42711616"). InnerVolumeSpecName "kube-api-access-cswh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.250724 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76a68241-269a-450a-9b60-90ff42711616-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.251005 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76a68241-269a-450a-9b60-90ff42711616-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.251023 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cswh9\" (UniqueName: \"kubernetes.io/projected/76a68241-269a-450a-9b60-90ff42711616-kube-api-access-cswh9\") on node \"crc\" DevicePath \"\""
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.596194 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm" event={"ID":"76a68241-269a-450a-9b60-90ff42711616","Type":"ContainerDied","Data":"adbe7293247c924e6cbe8fbbe84ca11c74db575b8151440c672dcb930dd3b9fa"}
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.596258 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adbe7293247c924e6cbe8fbbe84ca11c74db575b8151440c672dcb930dd3b9fa"
Nov 24 20:30:03 crc kubenswrapper[4812]: I1124 20:30:03.596291 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm"
Nov 24 20:30:04 crc kubenswrapper[4812]: I1124 20:30:04.109385 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82"]
Nov 24 20:30:04 crc kubenswrapper[4812]: I1124 20:30:04.113460 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400225-klf82"]
Nov 24 20:30:04 crc kubenswrapper[4812]: I1124 20:30:04.982681 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00398ad-d448-40a8-b21c-886e79d6bc1e" path="/var/lib/kubelet/pods/e00398ad-d448-40a8-b21c-886e79d6bc1e/volumes"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.498288 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98wjk"] Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.625736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-catalog-content\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.626225 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-utilities\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.626307 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phbs9\" (UniqueName: \"kubernetes.io/projected/2a11725b-961c-4c44-b78c-c7a748ebf49b-kube-api-access-phbs9\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.727429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phbs9\" (UniqueName: \"kubernetes.io/projected/2a11725b-961c-4c44-b78c-c7a748ebf49b-kube-api-access-phbs9\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.727732 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-catalog-content\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.727817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-utilities\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.728750 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-catalog-content\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.728773 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-utilities\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.757078 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-phbs9\" (UniqueName: \"kubernetes.io/projected/2a11725b-961c-4c44-b78c-c7a748ebf49b-kube-api-access-phbs9\") pod \"redhat-marketplace-98wjk\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") " pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:28 crc kubenswrapper[4812]: I1124 20:30:28.824648 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:29 crc kubenswrapper[4812]: I1124 20:30:29.365045 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-98wjk"] Nov 24 20:30:29 crc kubenswrapper[4812]: I1124 20:30:29.860102 4812 generic.go:334] "Generic (PLEG): container finished" podID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerID="eb30b3b8abdff5afc80d6d6f82f58c4a5ea2c7a4223b0c321530365c4978d9ff" exitCode=0 Nov 24 20:30:29 crc kubenswrapper[4812]: I1124 20:30:29.860164 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98wjk" event={"ID":"2a11725b-961c-4c44-b78c-c7a748ebf49b","Type":"ContainerDied","Data":"eb30b3b8abdff5afc80d6d6f82f58c4a5ea2c7a4223b0c321530365c4978d9ff"} Nov 24 20:30:29 crc kubenswrapper[4812]: I1124 20:30:29.860200 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98wjk" event={"ID":"2a11725b-961c-4c44-b78c-c7a748ebf49b","Type":"ContainerStarted","Data":"4418d99f43a92f42089517733d3364cb771c68eb3f971c65827830093e9df65a"} Nov 24 20:30:29 crc kubenswrapper[4812]: I1124 20:30:29.863765 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 20:30:31 crc kubenswrapper[4812]: I1124 20:30:31.882619 4812 generic.go:334] "Generic (PLEG): container finished" podID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerID="84a79d3f31ea24e4a8c52b9bfd5700658d201e390e8aca1278bb457712ba3d12" exitCode=0 Nov 24 20:30:31 crc kubenswrapper[4812]: I1124 20:30:31.882721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98wjk" event={"ID":"2a11725b-961c-4c44-b78c-c7a748ebf49b","Type":"ContainerDied","Data":"84a79d3f31ea24e4a8c52b9bfd5700658d201e390e8aca1278bb457712ba3d12"} Nov 24 20:30:32 crc kubenswrapper[4812]: I1124 20:30:32.897951 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98wjk" event={"ID":"2a11725b-961c-4c44-b78c-c7a748ebf49b","Type":"ContainerStarted","Data":"96e0b828c7a1b0f05a84a0a952dab834015ca6a1b79f50738e8ef8f079b93b03"} Nov 24 20:30:32 crc kubenswrapper[4812]: I1124 20:30:32.929224 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-98wjk" podStartSLOduration=2.508920712 podStartE2EDuration="4.929188557s" podCreationTimestamp="2025-11-24 20:30:28 +0000 UTC" firstStartedPulling="2025-11-24 20:30:29.863446021 +0000 UTC m=+4423.652398412" lastFinishedPulling="2025-11-24 20:30:32.283713856 +0000 UTC m=+4426.072666257" observedRunningTime="2025-11-24 20:30:32.92468468 +0000 UTC m=+4426.713637061" watchObservedRunningTime="2025-11-24 20:30:32.929188557 +0000 UTC m=+4426.718140968" Nov 24 20:30:38 crc kubenswrapper[4812]: I1124 20:30:38.825104 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:38 crc kubenswrapper[4812]: I1124 20:30:38.825579 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
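The pod_startup_latency_tracker entry above encodes its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (20:30:32.929188557 − 20:30:28 = 4.929188557s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (the m=+... offsets): 4426.072666257 − 4423.652398412 = 2.420267845s of pulling, leaving 2.508920712s. A check in Go using only the numbers printed above:

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) from firstStartedPulling and lastFinishedPulling.
	pull := 4426.072666257 - 4423.652398412 // image-pull window, seconds
	e2e := 4.929188557                      // podStartE2EDuration from the log
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, e2e-pull)
	// Prints pull=2.420267845s slo=2.508920712s, matching the logged
	// podStartSLOduration for redhat-marketplace-98wjk.
}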
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:38 crc kubenswrapper[4812]: I1124 20:30:38.873686 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:39 crc kubenswrapper[4812]: I1124 20:30:39.026827 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-98wjk" Nov 24 20:30:39 crc kubenswrapper[4812]: I1124 20:30:39.107159 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98wjk"] Nov 24 20:30:40 crc kubenswrapper[4812]: I1124 20:30:40.989718 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-98wjk" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="registry-server" containerID="cri-o://96e0b828c7a1b0f05a84a0a952dab834015ca6a1b79f50738e8ef8f079b93b03" gracePeriod=2 Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:41.999721 4812 generic.go:334] "Generic (PLEG): container finished" podID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerID="96e0b828c7a1b0f05a84a0a952dab834015ca6a1b79f50738e8ef8f079b93b03" exitCode=0 Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:41.999848 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98wjk" event={"ID":"2a11725b-961c-4c44-b78c-c7a748ebf49b","Type":"ContainerDied","Data":"96e0b828c7a1b0f05a84a0a952dab834015ca6a1b79f50738e8ef8f079b93b03"} Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.000117 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-98wjk" event={"ID":"2a11725b-961c-4c44-b78c-c7a748ebf49b","Type":"ContainerDied","Data":"4418d99f43a92f42089517733d3364cb771c68eb3f971c65827830093e9df65a"} Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.000131 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4418d99f43a92f42089517733d3364cb771c68eb3f971c65827830093e9df65a" Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.001838 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.001838 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98wjk"
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.046968 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phbs9\" (UniqueName: \"kubernetes.io/projected/2a11725b-961c-4c44-b78c-c7a748ebf49b-kube-api-access-phbs9\") pod \"2a11725b-961c-4c44-b78c-c7a748ebf49b\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") "
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.047011 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-catalog-content\") pod \"2a11725b-961c-4c44-b78c-c7a748ebf49b\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") "
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.047078 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-utilities\") pod \"2a11725b-961c-4c44-b78c-c7a748ebf49b\" (UID: \"2a11725b-961c-4c44-b78c-c7a748ebf49b\") "
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.049264 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-utilities" (OuterVolumeSpecName: "utilities") pod "2a11725b-961c-4c44-b78c-c7a748ebf49b" (UID: "2a11725b-961c-4c44-b78c-c7a748ebf49b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.059577 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a11725b-961c-4c44-b78c-c7a748ebf49b-kube-api-access-phbs9" (OuterVolumeSpecName: "kube-api-access-phbs9") pod "2a11725b-961c-4c44-b78c-c7a748ebf49b" (UID: "2a11725b-961c-4c44-b78c-c7a748ebf49b"). InnerVolumeSpecName "kube-api-access-phbs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.085035 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a11725b-961c-4c44-b78c-c7a748ebf49b" (UID: "2a11725b-961c-4c44-b78c-c7a748ebf49b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.148460 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phbs9\" (UniqueName: \"kubernetes.io/projected/2a11725b-961c-4c44-b78c-c7a748ebf49b-kube-api-access-phbs9\") on node \"crc\" DevicePath \"\""
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.148746 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 20:30:42 crc kubenswrapper[4812]: I1124 20:30:42.148883 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a11725b-961c-4c44-b78c-c7a748ebf49b-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 20:30:43 crc kubenswrapper[4812]: I1124 20:30:43.010409 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-98wjk"
Nov 24 20:30:43 crc kubenswrapper[4812]: I1124 20:30:43.046042 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-98wjk"]
Nov 24 20:30:43 crc kubenswrapper[4812]: I1124 20:30:43.056554 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-98wjk"]
Nov 24 20:30:44 crc kubenswrapper[4812]: I1124 20:30:44.982982 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" path="/var/lib/kubelet/pods/2a11725b-961c-4c44-b78c-c7a748ebf49b/volumes"
Nov 24 20:30:52 crc kubenswrapper[4812]: I1124 20:30:52.368296 4812 scope.go:117] "RemoveContainer" containerID="86792e34ba6bbffa1140ac701b5536e2abc532813c9d47b332511cc4d5708217"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.316150 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xqg4k"]
Nov 24 20:31:04 crc kubenswrapper[4812]: E1124 20:31:04.316955 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="extract-utilities"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.316970 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="extract-utilities"
Nov 24 20:31:04 crc kubenswrapper[4812]: E1124 20:31:04.316983 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="registry-server"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.316989 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="registry-server"
Nov 24 20:31:04 crc kubenswrapper[4812]: E1124 20:31:04.317000 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="extract-content"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.317006 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="extract-content"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.317162 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a11725b-961c-4c44-b78c-c7a748ebf49b" containerName="registry-server"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.318238 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.328513 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xqg4k"]
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.386273 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jx7x\" (UniqueName: \"kubernetes.io/projected/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-kube-api-access-7jx7x\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.386363 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-catalog-content\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.386444 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-utilities\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.487573 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-utilities\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.487665 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jx7x\" (UniqueName: \"kubernetes.io/projected/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-kube-api-access-7jx7x\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.487719 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-catalog-content\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.488374 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-catalog-content\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.488549 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-utilities\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.505901 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jx7x\" (UniqueName: \"kubernetes.io/projected/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-kube-api-access-7jx7x\") pod \"community-operators-xqg4k\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") " pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:04 crc kubenswrapper[4812]: I1124 20:31:04.637266 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:05 crc kubenswrapper[4812]: I1124 20:31:05.132602 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xqg4k"]
Nov 24 20:31:05 crc kubenswrapper[4812]: I1124 20:31:05.210182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqg4k" event={"ID":"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8","Type":"ContainerStarted","Data":"129f05229a02da2a63ced504d0276ef308a591fb83d325c055348f3687bbf5b0"}
Nov 24 20:31:06 crc kubenswrapper[4812]: I1124 20:31:06.222099 4812 generic.go:334] "Generic (PLEG): container finished" podID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerID="28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497" exitCode=0
Nov 24 20:31:06 crc kubenswrapper[4812]: I1124 20:31:06.222154 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqg4k" event={"ID":"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8","Type":"ContainerDied","Data":"28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497"}
Nov 24 20:31:07 crc kubenswrapper[4812]: I1124 20:31:07.232160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqg4k" event={"ID":"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8","Type":"ContainerStarted","Data":"7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205"}
Nov 24 20:31:07 crc kubenswrapper[4812]: E1124 20:31:07.501403 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf9d8b5_54e8_4e1b_93dd_5b047c0857c8.slice/crio-conmon-7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf9d8b5_54e8_4e1b_93dd_5b047c0857c8.slice/crio-7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 20:31:08 crc kubenswrapper[4812]: I1124 20:31:08.243008 4812 generic.go:334] "Generic (PLEG): container finished" podID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerID="7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205" exitCode=0
Nov 24 20:31:08 crc kubenswrapper[4812]: I1124 20:31:08.243047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqg4k" event={"ID":"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8","Type":"ContainerDied","Data":"7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205"}
Nov 24 20:31:09 crc kubenswrapper[4812]: I1124 20:31:09.254491 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqg4k" event={"ID":"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8","Type":"ContainerStarted","Data":"3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca"}
Nov 24 20:31:09 crc kubenswrapper[4812]: I1124 20:31:09.276562 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xqg4k" podStartSLOduration=2.7815207060000002 podStartE2EDuration="5.276543904s" podCreationTimestamp="2025-11-24 20:31:04 +0000 UTC" firstStartedPulling="2025-11-24 20:31:06.224552377 +0000 UTC m=+4460.013504758" lastFinishedPulling="2025-11-24 20:31:08.719575585 +0000 UTC m=+4462.508527956" observedRunningTime="2025-11-24 20:31:09.273152928 +0000 UTC m=+4463.062105329" watchObservedRunningTime="2025-11-24 20:31:09.276543904 +0000 UTC m=+4463.065496285"
Nov 24 20:31:14 crc kubenswrapper[4812]: I1124 20:31:14.637558 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:14 crc kubenswrapper[4812]: I1124 20:31:14.638248 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:14 crc kubenswrapper[4812]: I1124 20:31:14.705696 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:15 crc kubenswrapper[4812]: I1124 20:31:15.363622 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:15 crc kubenswrapper[4812]: I1124 20:31:15.430106 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xqg4k"]
Nov 24 20:31:17 crc kubenswrapper[4812]: I1124 20:31:17.337790 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xqg4k" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="registry-server" containerID="cri-o://3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca" gracePeriod=2
Nov 24 20:31:17 crc kubenswrapper[4812]: E1124 20:31:17.765163 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaf9d8b5_54e8_4e1b_93dd_5b047c0857c8.slice/crio-conmon-3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.303524 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.346448 4812 generic.go:334] "Generic (PLEG): container finished" podID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerID="3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca" exitCode=0
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.346548 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqg4k" event={"ID":"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8","Type":"ContainerDied","Data":"3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca"}
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.347392 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xqg4k" event={"ID":"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8","Type":"ContainerDied","Data":"129f05229a02da2a63ced504d0276ef308a591fb83d325c055348f3687bbf5b0"}
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.347511 4812 scope.go:117] "RemoveContainer" containerID="3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.346562 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xqg4k"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.371378 4812 scope.go:117] "RemoveContainer" containerID="7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.389585 4812 scope.go:117] "RemoveContainer" containerID="28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.406855 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jx7x\" (UniqueName: \"kubernetes.io/projected/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-kube-api-access-7jx7x\") pod \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") "
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.407019 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-catalog-content\") pod \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") "
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.407067 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-utilities\") pod \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\" (UID: \"faf9d8b5-54e8-4e1b-93dd-5b047c0857c8\") "
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.408099 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-utilities" (OuterVolumeSpecName: "utilities") pod "faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" (UID: "faf9d8b5-54e8-4e1b-93dd-5b047c0857c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.414759 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-kube-api-access-7jx7x" (OuterVolumeSpecName: "kube-api-access-7jx7x") pod "faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" (UID: "faf9d8b5-54e8-4e1b-93dd-5b047c0857c8"). InnerVolumeSpecName "kube-api-access-7jx7x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.427098 4812 scope.go:117] "RemoveContainer" containerID="3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca"
Nov 24 20:31:18 crc kubenswrapper[4812]: E1124 20:31:18.427704 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca\": container with ID starting with 3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca not found: ID does not exist" containerID="3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.427745 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca"} err="failed to get container status \"3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca\": rpc error: code = NotFound desc = could not find container \"3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca\": container with ID starting with 3a75758faebea5d4ad051edd3967d20033fd02a364fe4166a822318a0724f1ca not found: ID does not exist"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.427770 4812 scope.go:117] "RemoveContainer" containerID="7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205"
Nov 24 20:31:18 crc kubenswrapper[4812]: E1124 20:31:18.427990 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205\": container with ID starting with 7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205 not found: ID does not exist" containerID="7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205"
Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.428022 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205"} err="failed to get container status \"7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205\": rpc error: code = NotFound desc = could not find container \"7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205\": container with ID starting with 7ec482820ccccdc980ba3283fa639eec72f1e26b95bbddc2b94f873f2eacd205 not found: ID does not exist"
exist" containerID="28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497" Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.428514 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497"} err="failed to get container status \"28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497\": rpc error: code = NotFound desc = could not find container \"28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497\": container with ID starting with 28b8fdc8e7dcc8fc1bcf92c9d0d12f8559c906cf4b4638a719a934c278264497 not found: ID does not exist" Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.508552 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:31:18 crc kubenswrapper[4812]: I1124 20:31:18.509110 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jx7x\" (UniqueName: \"kubernetes.io/projected/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-kube-api-access-7jx7x\") on node \"crc\" DevicePath \"\"" Nov 24 20:31:19 crc kubenswrapper[4812]: I1124 20:31:19.205462 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" (UID: "faf9d8b5-54e8-4e1b-93dd-5b047c0857c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:31:19 crc kubenswrapper[4812]: I1124 20:31:19.222612 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:31:19 crc kubenswrapper[4812]: I1124 20:31:19.288757 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xqg4k"] Nov 24 20:31:19 crc kubenswrapper[4812]: I1124 20:31:19.297616 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xqg4k"] Nov 24 20:31:20 crc kubenswrapper[4812]: I1124 20:31:20.974509 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" path="/var/lib/kubelet/pods/faf9d8b5-54e8-4e1b-93dd-5b047c0857c8/volumes" Nov 24 20:32:02 crc kubenswrapper[4812]: I1124 20:32:02.998539 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:32:03 crc kubenswrapper[4812]: I1124 20:32:02.999233 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.777253 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzm4c"] Nov 24 20:32:30 crc kubenswrapper[4812]: E1124 20:32:30.778835 4812 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="extract-utilities" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.778868 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="extract-utilities" Nov 24 20:32:30 crc kubenswrapper[4812]: E1124 20:32:30.778901 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="registry-server" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.778918 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="registry-server" Nov 24 20:32:30 crc kubenswrapper[4812]: E1124 20:32:30.778977 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="extract-content" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.778994 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="extract-content" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.779461 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf9d8b5-54e8-4e1b-93dd-5b047c0857c8" containerName="registry-server" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.781580 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.798788 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzm4c"] Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.939274 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2df\" (UniqueName: \"kubernetes.io/projected/361d060e-4a55-445d-a386-a073d8b7dcbb-kube-api-access-6n2df\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.939574 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-utilities\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:30 crc kubenswrapper[4812]: I1124 20:32:30.939636 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-catalog-content\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.040434 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-utilities\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.040539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-catalog-content\") pod \"redhat-operators-fzm4c\" 
(UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.040591 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n2df\" (UniqueName: \"kubernetes.io/projected/361d060e-4a55-445d-a386-a073d8b7dcbb-kube-api-access-6n2df\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.041614 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-catalog-content\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.043560 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-utilities\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.064035 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n2df\" (UniqueName: \"kubernetes.io/projected/361d060e-4a55-445d-a386-a073d8b7dcbb-kube-api-access-6n2df\") pod \"redhat-operators-fzm4c\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.119096 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:31 crc kubenswrapper[4812]: I1124 20:32:31.335995 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzm4c"] Nov 24 20:32:32 crc kubenswrapper[4812]: I1124 20:32:32.055275 4812 generic.go:334] "Generic (PLEG): container finished" podID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerID="864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8" exitCode=0 Nov 24 20:32:32 crc kubenswrapper[4812]: I1124 20:32:32.055393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzm4c" event={"ID":"361d060e-4a55-445d-a386-a073d8b7dcbb","Type":"ContainerDied","Data":"864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8"} Nov 24 20:32:32 crc kubenswrapper[4812]: I1124 20:32:32.055696 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzm4c" event={"ID":"361d060e-4a55-445d-a386-a073d8b7dcbb","Type":"ContainerStarted","Data":"2efab5aa3c557ad63313866104c1699a6691019318dad48aa1b79cead5336e52"} Nov 24 20:32:32 crc kubenswrapper[4812]: I1124 20:32:32.998398 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:32:32 crc kubenswrapper[4812]: I1124 20:32:32.998498 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:32:34 crc kubenswrapper[4812]: I1124 20:32:34.076630 4812 generic.go:334] "Generic (PLEG): container finished" podID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerID="f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b" exitCode=0 Nov 24 20:32:34 crc kubenswrapper[4812]: I1124 20:32:34.076884 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzm4c" event={"ID":"361d060e-4a55-445d-a386-a073d8b7dcbb","Type":"ContainerDied","Data":"f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b"} Nov 24 20:32:35 crc kubenswrapper[4812]: I1124 20:32:35.085772 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzm4c" event={"ID":"361d060e-4a55-445d-a386-a073d8b7dcbb","Type":"ContainerStarted","Data":"9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9"} Nov 24 20:32:35 crc kubenswrapper[4812]: I1124 20:32:35.105706 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzm4c" podStartSLOduration=2.670027861 podStartE2EDuration="5.105686751s" podCreationTimestamp="2025-11-24 20:32:30 +0000 UTC" firstStartedPulling="2025-11-24 20:32:32.05815377 +0000 UTC m=+4545.847106181" lastFinishedPulling="2025-11-24 20:32:34.49381267 +0000 UTC m=+4548.282765071" observedRunningTime="2025-11-24 20:32:35.103214391 +0000 UTC m=+4548.892166792" watchObservedRunningTime="2025-11-24 20:32:35.105686751 +0000 UTC m=+4548.894639122" Nov 24 20:32:41 crc kubenswrapper[4812]: I1124 20:32:41.119622 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:41 crc kubenswrapper[4812]: I1124 20:32:41.120156 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:41 crc kubenswrapper[4812]: I1124 20:32:41.170601 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:41 crc kubenswrapper[4812]: I1124 20:32:41.219419 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:41 crc kubenswrapper[4812]: I1124 20:32:41.400889 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzm4c"] Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.168948 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fzm4c" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="registry-server" containerID="cri-o://9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9" gracePeriod=2 Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.558222 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.756877 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n2df\" (UniqueName: \"kubernetes.io/projected/361d060e-4a55-445d-a386-a073d8b7dcbb-kube-api-access-6n2df\") pod \"361d060e-4a55-445d-a386-a073d8b7dcbb\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.756940 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-utilities\") pod \"361d060e-4a55-445d-a386-a073d8b7dcbb\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.757071 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-catalog-content\") pod \"361d060e-4a55-445d-a386-a073d8b7dcbb\" (UID: \"361d060e-4a55-445d-a386-a073d8b7dcbb\") " Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.758169 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-utilities" (OuterVolumeSpecName: "utilities") pod "361d060e-4a55-445d-a386-a073d8b7dcbb" (UID: "361d060e-4a55-445d-a386-a073d8b7dcbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.765546 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/361d060e-4a55-445d-a386-a073d8b7dcbb-kube-api-access-6n2df" (OuterVolumeSpecName: "kube-api-access-6n2df") pod "361d060e-4a55-445d-a386-a073d8b7dcbb" (UID: "361d060e-4a55-445d-a386-a073d8b7dcbb"). InnerVolumeSpecName "kube-api-access-6n2df". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.859376 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n2df\" (UniqueName: \"kubernetes.io/projected/361d060e-4a55-445d-a386-a073d8b7dcbb-kube-api-access-6n2df\") on node \"crc\" DevicePath \"\"" Nov 24 20:32:43 crc kubenswrapper[4812]: I1124 20:32:43.859427 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.183496 4812 generic.go:334] "Generic (PLEG): container finished" podID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerID="9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9" exitCode=0 Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.183563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzm4c" event={"ID":"361d060e-4a55-445d-a386-a073d8b7dcbb","Type":"ContainerDied","Data":"9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9"} Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.183584 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzm4c" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.183604 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzm4c" event={"ID":"361d060e-4a55-445d-a386-a073d8b7dcbb","Type":"ContainerDied","Data":"2efab5aa3c557ad63313866104c1699a6691019318dad48aa1b79cead5336e52"} Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.183633 4812 scope.go:117] "RemoveContainer" containerID="9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.227924 4812 scope.go:117] "RemoveContainer" containerID="f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.256169 4812 scope.go:117] "RemoveContainer" containerID="864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.303600 4812 scope.go:117] "RemoveContainer" containerID="9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9" Nov 24 20:32:44 crc kubenswrapper[4812]: E1124 20:32:44.304330 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9\": container with ID starting with 9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9 not found: ID does not exist" containerID="9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.304434 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9"} err="failed to get container status \"9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9\": rpc error: code = NotFound desc = could not find container \"9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9\": container with ID starting with 9654b1795955b6eb68291272f413eee87452713a12f19f86b398a06a11cd5ec9 not found: ID does not exist" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.304478 4812 scope.go:117] "RemoveContainer" containerID="f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b" Nov 24 20:32:44 crc kubenswrapper[4812]: E1124 20:32:44.304975 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b\": container with ID starting with f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b not found: ID does not exist" containerID="f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.305023 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b"} err="failed to get container status \"f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b\": rpc error: code = NotFound desc = could not find container \"f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b\": container with ID starting with f8972737c42a6e30ef06682471e98079de068edaf8bbd3fb84e48d819c45ec4b not found: ID does not exist" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.305057 4812 scope.go:117] "RemoveContainer" 
containerID="864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8" Nov 24 20:32:44 crc kubenswrapper[4812]: E1124 20:32:44.305578 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8\": container with ID starting with 864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8 not found: ID does not exist" containerID="864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8" Nov 24 20:32:44 crc kubenswrapper[4812]: I1124 20:32:44.305627 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8"} err="failed to get container status \"864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8\": rpc error: code = NotFound desc = could not find container \"864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8\": container with ID starting with 864ce14e3957bcbb70ed03bc0f13f6697b4793626c6aa563f947c63d217eddb8 not found: ID does not exist" Nov 24 20:32:45 crc kubenswrapper[4812]: I1124 20:32:45.592738 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "361d060e-4a55-445d-a386-a073d8b7dcbb" (UID: "361d060e-4a55-445d-a386-a073d8b7dcbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:32:45 crc kubenswrapper[4812]: I1124 20:32:45.687518 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/361d060e-4a55-445d-a386-a073d8b7dcbb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:32:45 crc kubenswrapper[4812]: I1124 20:32:45.737595 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzm4c"] Nov 24 20:32:45 crc kubenswrapper[4812]: I1124 20:32:45.748196 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzm4c"] Nov 24 20:32:46 crc kubenswrapper[4812]: I1124 20:32:46.983777 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" path="/var/lib/kubelet/pods/361d060e-4a55-445d-a386-a073d8b7dcbb/volumes" Nov 24 20:33:02 crc kubenswrapper[4812]: I1124 20:33:02.999150 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:02.999826 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:02.999904 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:03.001760 4812 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:03.001877 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" gracePeriod=600 Nov 24 20:33:03 crc kubenswrapper[4812]: E1124 20:33:03.128507 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:03.393389 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" exitCode=0 Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:03.393445 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f"} Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:03.393484 4812 scope.go:117] "RemoveContainer" containerID="36ad40c48562b05eb9b464d40849bcfd97cee08a4e1f753bbbb2f77cd7452f53" Nov 24 20:33:03 crc kubenswrapper[4812]: I1124 20:33:03.394690 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:33:03 crc kubenswrapper[4812]: E1124 20:33:03.395259 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:33:17 crc kubenswrapper[4812]: I1124 20:33:17.966379 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:33:17 crc kubenswrapper[4812]: E1124 20:33:17.967309 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:33:32 crc kubenswrapper[4812]: I1124 20:33:32.966479 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:33:32 crc kubenswrapper[4812]: E1124 20:33:32.967578 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:33:43 crc kubenswrapper[4812]: I1124 20:33:43.966577 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:33:43 crc kubenswrapper[4812]: E1124 20:33:43.967900 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.165591 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bhlqj"] Nov 24 20:33:48 crc kubenswrapper[4812]: E1124 20:33:48.166215 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="registry-server" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.166230 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="registry-server" Nov 24 20:33:48 crc kubenswrapper[4812]: E1124 20:33:48.166242 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="extract-utilities" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.166250 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="extract-utilities" Nov 24 20:33:48 crc kubenswrapper[4812]: E1124 20:33:48.166261 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="extract-content" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.166268 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="extract-content" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.166489 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="361d060e-4a55-445d-a386-a073d8b7dcbb" containerName="registry-server" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.167688 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.183904 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhlqj"] Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.288811 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-catalog-content\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.288943 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-utilities\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.289002 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g28s2\" (UniqueName: \"kubernetes.io/projected/75266344-a7b6-425d-a843-eb7666398c51-kube-api-access-g28s2\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.390244 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-utilities\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.390311 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g28s2\" (UniqueName: \"kubernetes.io/projected/75266344-a7b6-425d-a843-eb7666398c51-kube-api-access-g28s2\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.390461 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-catalog-content\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.390763 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-utilities\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.390939 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-catalog-content\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.413725 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-g28s2\" (UniqueName: \"kubernetes.io/projected/75266344-a7b6-425d-a843-eb7666398c51-kube-api-access-g28s2\") pod \"certified-operators-bhlqj\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:48 crc kubenswrapper[4812]: I1124 20:33:48.522289 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:49 crc kubenswrapper[4812]: I1124 20:33:49.026218 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bhlqj"] Nov 24 20:33:49 crc kubenswrapper[4812]: W1124 20:33:49.031108 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75266344_a7b6_425d_a843_eb7666398c51.slice/crio-358acea70ce2a102b33a02456ad6a61d57d25971107b827e6f95252ee9c6ca9f WatchSource:0}: Error finding container 358acea70ce2a102b33a02456ad6a61d57d25971107b827e6f95252ee9c6ca9f: Status 404 returned error can't find the container with id 358acea70ce2a102b33a02456ad6a61d57d25971107b827e6f95252ee9c6ca9f Nov 24 20:33:49 crc kubenswrapper[4812]: I1124 20:33:49.848580 4812 generic.go:334] "Generic (PLEG): container finished" podID="75266344-a7b6-425d-a843-eb7666398c51" containerID="6f4b85daad9e54f91843190b3f46e421d6add0111f913a51c72e563ac829ae88" exitCode=0 Nov 24 20:33:49 crc kubenswrapper[4812]: I1124 20:33:49.848872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhlqj" event={"ID":"75266344-a7b6-425d-a843-eb7666398c51","Type":"ContainerDied","Data":"6f4b85daad9e54f91843190b3f46e421d6add0111f913a51c72e563ac829ae88"} Nov 24 20:33:49 crc kubenswrapper[4812]: I1124 20:33:49.849160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhlqj" event={"ID":"75266344-a7b6-425d-a843-eb7666398c51","Type":"ContainerStarted","Data":"358acea70ce2a102b33a02456ad6a61d57d25971107b827e6f95252ee9c6ca9f"} Nov 24 20:33:50 crc kubenswrapper[4812]: I1124 20:33:50.860105 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhlqj" event={"ID":"75266344-a7b6-425d-a843-eb7666398c51","Type":"ContainerStarted","Data":"0cde13d898f4f32f3963ae9791900334f6e59af9f6e9b197a7042ff5b66e2bdf"} Nov 24 20:33:51 crc kubenswrapper[4812]: I1124 20:33:51.873413 4812 generic.go:334] "Generic (PLEG): container finished" podID="75266344-a7b6-425d-a843-eb7666398c51" containerID="0cde13d898f4f32f3963ae9791900334f6e59af9f6e9b197a7042ff5b66e2bdf" exitCode=0 Nov 24 20:33:51 crc kubenswrapper[4812]: I1124 20:33:51.873520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhlqj" event={"ID":"75266344-a7b6-425d-a843-eb7666398c51","Type":"ContainerDied","Data":"0cde13d898f4f32f3963ae9791900334f6e59af9f6e9b197a7042ff5b66e2bdf"} Nov 24 20:33:52 crc kubenswrapper[4812]: I1124 20:33:52.892574 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhlqj" event={"ID":"75266344-a7b6-425d-a843-eb7666398c51","Type":"ContainerStarted","Data":"8282b79087e81626c99562ec146c693b96f1aae7569ae48112d8ea3a75a207ca"} Nov 24 20:33:52 crc kubenswrapper[4812]: I1124 20:33:52.915647 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bhlqj" 
podStartSLOduration=2.4795147650000002 podStartE2EDuration="4.915613347s" podCreationTimestamp="2025-11-24 20:33:48 +0000 UTC" firstStartedPulling="2025-11-24 20:33:49.852789334 +0000 UTC m=+4623.641741745" lastFinishedPulling="2025-11-24 20:33:52.288887946 +0000 UTC m=+4626.077840327" observedRunningTime="2025-11-24 20:33:52.913817736 +0000 UTC m=+4626.702770117" watchObservedRunningTime="2025-11-24 20:33:52.915613347 +0000 UTC m=+4626.704565728" Nov 24 20:33:58 crc kubenswrapper[4812]: I1124 20:33:58.523066 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:58 crc kubenswrapper[4812]: I1124 20:33:58.523590 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:58 crc kubenswrapper[4812]: I1124 20:33:58.591699 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:58 crc kubenswrapper[4812]: I1124 20:33:58.966271 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:33:58 crc kubenswrapper[4812]: E1124 20:33:58.966992 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:33:59 crc kubenswrapper[4812]: I1124 20:33:59.036063 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:33:59 crc kubenswrapper[4812]: I1124 20:33:59.099278 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhlqj"] Nov 24 20:34:01 crc kubenswrapper[4812]: I1124 20:34:01.009212 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bhlqj" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="registry-server" containerID="cri-o://8282b79087e81626c99562ec146c693b96f1aae7569ae48112d8ea3a75a207ca" gracePeriod=2 Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.023379 4812 generic.go:334] "Generic (PLEG): container finished" podID="75266344-a7b6-425d-a843-eb7666398c51" containerID="8282b79087e81626c99562ec146c693b96f1aae7569ae48112d8ea3a75a207ca" exitCode=0 Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.023676 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhlqj" event={"ID":"75266344-a7b6-425d-a843-eb7666398c51","Type":"ContainerDied","Data":"8282b79087e81626c99562ec146c693b96f1aae7569ae48112d8ea3a75a207ca"} Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.857540 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.962842 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g28s2\" (UniqueName: \"kubernetes.io/projected/75266344-a7b6-425d-a843-eb7666398c51-kube-api-access-g28s2\") pod \"75266344-a7b6-425d-a843-eb7666398c51\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.963058 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-utilities\") pod \"75266344-a7b6-425d-a843-eb7666398c51\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.963213 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-catalog-content\") pod \"75266344-a7b6-425d-a843-eb7666398c51\" (UID: \"75266344-a7b6-425d-a843-eb7666398c51\") " Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.964665 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-utilities" (OuterVolumeSpecName: "utilities") pod "75266344-a7b6-425d-a843-eb7666398c51" (UID: "75266344-a7b6-425d-a843-eb7666398c51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:34:02 crc kubenswrapper[4812]: I1124 20:34:02.971924 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75266344-a7b6-425d-a843-eb7666398c51-kube-api-access-g28s2" (OuterVolumeSpecName: "kube-api-access-g28s2") pod "75266344-a7b6-425d-a843-eb7666398c51" (UID: "75266344-a7b6-425d-a843-eb7666398c51"). InnerVolumeSpecName "kube-api-access-g28s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.033783 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75266344-a7b6-425d-a843-eb7666398c51" (UID: "75266344-a7b6-425d-a843-eb7666398c51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.039804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bhlqj" event={"ID":"75266344-a7b6-425d-a843-eb7666398c51","Type":"ContainerDied","Data":"358acea70ce2a102b33a02456ad6a61d57d25971107b827e6f95252ee9c6ca9f"} Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.039856 4812 scope.go:117] "RemoveContainer" containerID="8282b79087e81626c99562ec146c693b96f1aae7569ae48112d8ea3a75a207ca" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.040047 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bhlqj" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.064861 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.064892 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g28s2\" (UniqueName: \"kubernetes.io/projected/75266344-a7b6-425d-a843-eb7666398c51-kube-api-access-g28s2\") on node \"crc\" DevicePath \"\"" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.064906 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75266344-a7b6-425d-a843-eb7666398c51-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.082595 4812 scope.go:117] "RemoveContainer" containerID="0cde13d898f4f32f3963ae9791900334f6e59af9f6e9b197a7042ff5b66e2bdf" Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.082677 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bhlqj"] Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.087229 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bhlqj"] Nov 24 20:34:03 crc kubenswrapper[4812]: I1124 20:34:03.106438 4812 scope.go:117] "RemoveContainer" containerID="6f4b85daad9e54f91843190b3f46e421d6add0111f913a51c72e563ac829ae88" Nov 24 20:34:04 crc kubenswrapper[4812]: I1124 20:34:04.983485 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75266344-a7b6-425d-a843-eb7666398c51" path="/var/lib/kubelet/pods/75266344-a7b6-425d-a843-eb7666398c51/volumes" Nov 24 20:34:09 crc kubenswrapper[4812]: I1124 20:34:09.966420 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:34:09 crc kubenswrapper[4812]: E1124 20:34:09.966882 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:34:21 crc kubenswrapper[4812]: I1124 20:34:21.965239 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:34:21 crc kubenswrapper[4812]: E1124 20:34:21.966258 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:34:34 crc kubenswrapper[4812]: I1124 20:34:34.966063 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:34:34 crc kubenswrapper[4812]: E1124 20:34:34.966829 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:34:49 crc kubenswrapper[4812]: I1124 20:34:49.966921 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:34:49 crc kubenswrapper[4812]: E1124 20:34:49.968087 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:35:02 crc kubenswrapper[4812]: I1124 20:35:02.967440 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:35:02 crc kubenswrapper[4812]: E1124 20:35:02.967989 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:35:14 crc kubenswrapper[4812]: I1124 20:35:14.966034 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:35:14 crc kubenswrapper[4812]: E1124 20:35:14.967028 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:35:29 crc kubenswrapper[4812]: I1124 20:35:29.965915 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:35:29 crc kubenswrapper[4812]: E1124 20:35:29.966849 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:35:43 crc kubenswrapper[4812]: I1124 20:35:43.969988 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:35:43 crc kubenswrapper[4812]: E1124 20:35:43.971214 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:35:55 crc kubenswrapper[4812]: I1124 20:35:55.966146 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:35:55 crc kubenswrapper[4812]: E1124 20:35:55.966967 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:36:10 crc kubenswrapper[4812]: I1124 20:36:10.965982 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:36:10 crc kubenswrapper[4812]: E1124 20:36:10.966998 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:36:22 crc kubenswrapper[4812]: I1124 20:36:22.966554 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:36:22 crc kubenswrapper[4812]: E1124 20:36:22.967475 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:36:35 crc kubenswrapper[4812]: I1124 20:36:35.966302 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:36:35 crc kubenswrapper[4812]: E1124 20:36:35.967538 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:36:48 crc kubenswrapper[4812]: I1124 20:36:48.965928 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:36:48 crc kubenswrapper[4812]: E1124 20:36:48.966837 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.353626 4812 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-lr8tv"] Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.358806 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-lr8tv"] Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.499561 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-c7w6t"] Nov 24 20:36:51 crc kubenswrapper[4812]: E1124 20:36:51.500046 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="extract-utilities" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.500078 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="extract-utilities" Nov 24 20:36:51 crc kubenswrapper[4812]: E1124 20:36:51.500109 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="extract-content" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.500123 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="extract-content" Nov 24 20:36:51 crc kubenswrapper[4812]: E1124 20:36:51.500169 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="registry-server" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.500181 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="registry-server" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.500521 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="75266344-a7b6-425d-a843-eb7666398c51" containerName="registry-server" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.501576 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.505107 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.505560 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.506109 4812 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-sg6gf" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.506386 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.510850 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-c7w6t"] Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.627417 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxfp7\" (UniqueName: \"kubernetes.io/projected/1b278c52-b670-4253-9240-412d03817730-kube-api-access-cxfp7\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.627659 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1b278c52-b670-4253-9240-412d03817730-crc-storage\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.627743 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1b278c52-b670-4253-9240-412d03817730-node-mnt\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.729272 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxfp7\" (UniqueName: \"kubernetes.io/projected/1b278c52-b670-4253-9240-412d03817730-kube-api-access-cxfp7\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.729778 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1b278c52-b670-4253-9240-412d03817730-crc-storage\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.730444 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1b278c52-b670-4253-9240-412d03817730-node-mnt\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.730961 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1b278c52-b670-4253-9240-412d03817730-crc-storage\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " 
pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.731030 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1b278c52-b670-4253-9240-412d03817730-node-mnt\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:51 crc kubenswrapper[4812]: I1124 20:36:51.922055 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxfp7\" (UniqueName: \"kubernetes.io/projected/1b278c52-b670-4253-9240-412d03817730-kube-api-access-cxfp7\") pod \"crc-storage-crc-c7w6t\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.159788 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.379823 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-c7w6t"] Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.389942 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.599525 4812 scope.go:117] "RemoveContainer" containerID="eb30b3b8abdff5afc80d6d6f82f58c4a5ea2c7a4223b0c321530365c4978d9ff" Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.634466 4812 scope.go:117] "RemoveContainer" containerID="84a79d3f31ea24e4a8c52b9bfd5700658d201e390e8aca1278bb457712ba3d12" Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.674246 4812 scope.go:117] "RemoveContainer" containerID="96e0b828c7a1b0f05a84a0a952dab834015ca6a1b79f50738e8ef8f079b93b03" Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.717740 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-c7w6t" event={"ID":"1b278c52-b670-4253-9240-412d03817730","Type":"ContainerStarted","Data":"036380a4b42bca98c570a6fd2b4f0ea6e9de396b24dd64dbf5c4ae160b50cd32"} Nov 24 20:36:52 crc kubenswrapper[4812]: I1124 20:36:52.988280 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="740086f8-f1a4-484e-946c-03e70a2a55fa" path="/var/lib/kubelet/pods/740086f8-f1a4-484e-946c-03e70a2a55fa/volumes" Nov 24 20:36:53 crc kubenswrapper[4812]: I1124 20:36:53.730744 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b278c52-b670-4253-9240-412d03817730" containerID="0eba4e002acca2167ab08e84f811c8c4ee86538c7a5a5594cd5903171a8eb80e" exitCode=0 Nov 24 20:36:53 crc kubenswrapper[4812]: I1124 20:36:53.730823 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-c7w6t" event={"ID":"1b278c52-b670-4253-9240-412d03817730","Type":"ContainerDied","Data":"0eba4e002acca2167ab08e84f811c8c4ee86538c7a5a5594cd5903171a8eb80e"} Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.178935 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.202065 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1b278c52-b670-4253-9240-412d03817730-crc-storage\") pod \"1b278c52-b670-4253-9240-412d03817730\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.202124 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1b278c52-b670-4253-9240-412d03817730-node-mnt\") pod \"1b278c52-b670-4253-9240-412d03817730\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.202187 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxfp7\" (UniqueName: \"kubernetes.io/projected/1b278c52-b670-4253-9240-412d03817730-kube-api-access-cxfp7\") pod \"1b278c52-b670-4253-9240-412d03817730\" (UID: \"1b278c52-b670-4253-9240-412d03817730\") " Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.202410 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1b278c52-b670-4253-9240-412d03817730-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "1b278c52-b670-4253-9240-412d03817730" (UID: "1b278c52-b670-4253-9240-412d03817730"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.217692 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b278c52-b670-4253-9240-412d03817730-kube-api-access-cxfp7" (OuterVolumeSpecName: "kube-api-access-cxfp7") pod "1b278c52-b670-4253-9240-412d03817730" (UID: "1b278c52-b670-4253-9240-412d03817730"). InnerVolumeSpecName "kube-api-access-cxfp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.238236 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b278c52-b670-4253-9240-412d03817730-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "1b278c52-b670-4253-9240-412d03817730" (UID: "1b278c52-b670-4253-9240-412d03817730"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.303165 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxfp7\" (UniqueName: \"kubernetes.io/projected/1b278c52-b670-4253-9240-412d03817730-kube-api-access-cxfp7\") on node \"crc\" DevicePath \"\"" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.303195 4812 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/1b278c52-b670-4253-9240-412d03817730-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.303204 4812 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/1b278c52-b670-4253-9240-412d03817730-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.753576 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-c7w6t" event={"ID":"1b278c52-b670-4253-9240-412d03817730","Type":"ContainerDied","Data":"036380a4b42bca98c570a6fd2b4f0ea6e9de396b24dd64dbf5c4ae160b50cd32"} Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.753934 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036380a4b42bca98c570a6fd2b4f0ea6e9de396b24dd64dbf5c4ae160b50cd32" Nov 24 20:36:55 crc kubenswrapper[4812]: I1124 20:36:55.753674 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-c7w6t" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.472432 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-c7w6t"] Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.482295 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-c7w6t"] Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.617862 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-ffnb8"] Nov 24 20:36:57 crc kubenswrapper[4812]: E1124 20:36:57.618476 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b278c52-b670-4253-9240-412d03817730" containerName="storage" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.618523 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b278c52-b670-4253-9240-412d03817730" containerName="storage" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.618813 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b278c52-b670-4253-9240-412d03817730" containerName="storage" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.619719 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.623962 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.624189 4812 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-sg6gf" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.624463 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.624949 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.630758 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ffnb8"] Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.644777 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b76059f5-c013-4587-a2b4-0fd55d43abe1-node-mnt\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.644948 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b76059f5-c013-4587-a2b4-0fd55d43abe1-crc-storage\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.644990 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4p4k\" (UniqueName: \"kubernetes.io/projected/b76059f5-c013-4587-a2b4-0fd55d43abe1-kube-api-access-z4p4k\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.747930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b76059f5-c013-4587-a2b4-0fd55d43abe1-node-mnt\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.748484 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b76059f5-c013-4587-a2b4-0fd55d43abe1-node-mnt\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.748520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b76059f5-c013-4587-a2b4-0fd55d43abe1-crc-storage\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.748620 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4p4k\" (UniqueName: \"kubernetes.io/projected/b76059f5-c013-4587-a2b4-0fd55d43abe1-kube-api-access-z4p4k\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " 
pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.750902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b76059f5-c013-4587-a2b4-0fd55d43abe1-crc-storage\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.783759 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4p4k\" (UniqueName: \"kubernetes.io/projected/b76059f5-c013-4587-a2b4-0fd55d43abe1-kube-api-access-z4p4k\") pod \"crc-storage-crc-ffnb8\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:57 crc kubenswrapper[4812]: I1124 20:36:57.949700 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:36:58 crc kubenswrapper[4812]: I1124 20:36:58.210585 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-ffnb8"] Nov 24 20:36:58 crc kubenswrapper[4812]: I1124 20:36:58.783207 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ffnb8" event={"ID":"b76059f5-c013-4587-a2b4-0fd55d43abe1","Type":"ContainerStarted","Data":"2fe556b6fcc2a5deba39282253433c871be999f794730f9c5db0a1179f8fdfa2"} Nov 24 20:36:58 crc kubenswrapper[4812]: I1124 20:36:58.982697 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b278c52-b670-4253-9240-412d03817730" path="/var/lib/kubelet/pods/1b278c52-b670-4253-9240-412d03817730/volumes" Nov 24 20:36:59 crc kubenswrapper[4812]: I1124 20:36:59.798277 4812 generic.go:334] "Generic (PLEG): container finished" podID="b76059f5-c013-4587-a2b4-0fd55d43abe1" containerID="4a41fe52ed32eb28cc501db9032dbfd9c750f6b7a68d92fc76989c2cbf76cc91" exitCode=0 Nov 24 20:36:59 crc kubenswrapper[4812]: I1124 20:36:59.798383 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ffnb8" event={"ID":"b76059f5-c013-4587-a2b4-0fd55d43abe1","Type":"ContainerDied","Data":"4a41fe52ed32eb28cc501db9032dbfd9c750f6b7a68d92fc76989c2cbf76cc91"} Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.170081 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.202625 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b76059f5-c013-4587-a2b4-0fd55d43abe1-crc-storage\") pod \"b76059f5-c013-4587-a2b4-0fd55d43abe1\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.202809 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4p4k\" (UniqueName: \"kubernetes.io/projected/b76059f5-c013-4587-a2b4-0fd55d43abe1-kube-api-access-z4p4k\") pod \"b76059f5-c013-4587-a2b4-0fd55d43abe1\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.202909 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b76059f5-c013-4587-a2b4-0fd55d43abe1-node-mnt\") pod \"b76059f5-c013-4587-a2b4-0fd55d43abe1\" (UID: \"b76059f5-c013-4587-a2b4-0fd55d43abe1\") " Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.203411 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b76059f5-c013-4587-a2b4-0fd55d43abe1-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b76059f5-c013-4587-a2b4-0fd55d43abe1" (UID: "b76059f5-c013-4587-a2b4-0fd55d43abe1"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.211728 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b76059f5-c013-4587-a2b4-0fd55d43abe1-kube-api-access-z4p4k" (OuterVolumeSpecName: "kube-api-access-z4p4k") pod "b76059f5-c013-4587-a2b4-0fd55d43abe1" (UID: "b76059f5-c013-4587-a2b4-0fd55d43abe1"). InnerVolumeSpecName "kube-api-access-z4p4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.231435 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b76059f5-c013-4587-a2b4-0fd55d43abe1-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b76059f5-c013-4587-a2b4-0fd55d43abe1" (UID: "b76059f5-c013-4587-a2b4-0fd55d43abe1"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.304154 4812 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b76059f5-c013-4587-a2b4-0fd55d43abe1-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.304416 4812 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b76059f5-c013-4587-a2b4-0fd55d43abe1-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.304505 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4p4k\" (UniqueName: \"kubernetes.io/projected/b76059f5-c013-4587-a2b4-0fd55d43abe1-kube-api-access-z4p4k\") on node \"crc\" DevicePath \"\"" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.821431 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-ffnb8" event={"ID":"b76059f5-c013-4587-a2b4-0fd55d43abe1","Type":"ContainerDied","Data":"2fe556b6fcc2a5deba39282253433c871be999f794730f9c5db0a1179f8fdfa2"} Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.821479 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fe556b6fcc2a5deba39282253433c871be999f794730f9c5db0a1179f8fdfa2" Nov 24 20:37:01 crc kubenswrapper[4812]: I1124 20:37:01.821517 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-ffnb8" Nov 24 20:37:02 crc kubenswrapper[4812]: I1124 20:37:02.966641 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:37:02 crc kubenswrapper[4812]: E1124 20:37:02.967603 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:37:14 crc kubenswrapper[4812]: I1124 20:37:14.966247 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:37:14 crc kubenswrapper[4812]: E1124 20:37:14.967364 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:37:29 crc kubenswrapper[4812]: I1124 20:37:29.966571 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:37:29 crc kubenswrapper[4812]: E1124 20:37:29.967568 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" 
podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:37:42 crc kubenswrapper[4812]: I1124 20:37:42.966387 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:37:42 crc kubenswrapper[4812]: E1124 20:37:42.967559 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:37:52 crc kubenswrapper[4812]: I1124 20:37:52.735057 4812 scope.go:117] "RemoveContainer" containerID="0e0ff0ce14b9486e1f912226d61dd91d1d734c6454cb4c48de9d2540698b03ca" Nov 24 20:37:56 crc kubenswrapper[4812]: I1124 20:37:56.980892 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:37:56 crc kubenswrapper[4812]: E1124 20:37:56.981508 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:38:07 crc kubenswrapper[4812]: I1124 20:38:07.967660 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:38:08 crc kubenswrapper[4812]: I1124 20:38:08.488755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"dbde7a21765fc5b63caec3c7e3d0f2beae4d92e2ef96dc251d6a1b6345e4d456"} Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.129225 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-866449bdb9-nrh6x"] Nov 24 20:39:01 crc kubenswrapper[4812]: E1124 20:39:01.130156 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76059f5-c013-4587-a2b4-0fd55d43abe1" containerName="storage" Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.130173 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76059f5-c013-4587-a2b4-0fd55d43abe1" containerName="storage" Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.130413 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76059f5-c013-4587-a2b4-0fd55d43abe1" containerName="storage" Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.131313 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.129225 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-866449bdb9-nrh6x"]
Nov 24 20:39:01 crc kubenswrapper[4812]: E1124 20:39:01.130156 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b76059f5-c013-4587-a2b4-0fd55d43abe1" containerName="storage"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.130173 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b76059f5-c013-4587-a2b4-0fd55d43abe1" containerName="storage"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.130413 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b76059f5-c013-4587-a2b4-0fd55d43abe1" containerName="storage"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.131313 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.133410 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.134351 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.134612 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-p8plm"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.134722 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.210448 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-866449bdb9-nrh6x"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.223094 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55c86457d7-blw58"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.224585 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.229615 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.244302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55c86457d7-blw58"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.260875 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0d6a-f060-4501-9483-deda1a0e4648-config\") pod \"dnsmasq-dns-866449bdb9-nrh6x\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.260974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-config\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.260993 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-dns-svc\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.261025 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5pl9\" (UniqueName: \"kubernetes.io/projected/e9d4310a-766c-41f6-9157-2fb3e5699c7f-kube-api-access-p5pl9\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.261065 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4vvf\" (UniqueName: \"kubernetes.io/projected/9dde0d6a-f060-4501-9483-deda1a0e4648-kube-api-access-s4vvf\") pod \"dnsmasq-dns-866449bdb9-nrh6x\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.362378 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4vvf\" (UniqueName: \"kubernetes.io/projected/9dde0d6a-f060-4501-9483-deda1a0e4648-kube-api-access-s4vvf\") pod \"dnsmasq-dns-866449bdb9-nrh6x\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.362448 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0d6a-f060-4501-9483-deda1a0e4648-config\") pod \"dnsmasq-dns-866449bdb9-nrh6x\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.362558 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-config\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.362586 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-dns-svc\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.362615 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5pl9\" (UniqueName: \"kubernetes.io/projected/e9d4310a-766c-41f6-9157-2fb3e5699c7f-kube-api-access-p5pl9\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.363687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-dns-svc\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.363687 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-config\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.364235 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0d6a-f060-4501-9483-deda1a0e4648-config\") pod \"dnsmasq-dns-866449bdb9-nrh6x\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.384028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5pl9\" (UniqueName: \"kubernetes.io/projected/e9d4310a-766c-41f6-9157-2fb3e5699c7f-kube-api-access-p5pl9\") pod \"dnsmasq-dns-55c86457d7-blw58\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.385129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4vvf\" (UniqueName: \"kubernetes.io/projected/9dde0d6a-f060-4501-9483-deda1a0e4648-kube-api-access-s4vvf\") pod \"dnsmasq-dns-866449bdb9-nrh6x\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.450720 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866449bdb9-nrh6x"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.548150 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55c86457d7-blw58"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.574293 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c86457d7-blw58"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.597570 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f4c6c447c-88llx"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.598715 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.625756 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4c6c447c-88llx"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.667850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pl25\" (UniqueName: \"kubernetes.io/projected/baa471f1-03e9-4da3-b2f7-836bc7413843-kube-api-access-2pl25\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.668060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-config\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.668135 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-dns-svc\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.768817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-dns-svc\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.768913 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pl25\" (UniqueName: \"kubernetes.io/projected/baa471f1-03e9-4da3-b2f7-836bc7413843-kube-api-access-2pl25\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.768972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-config\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.769923 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-dns-svc\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.769943 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-config\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.792804 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pl25\" (UniqueName: \"kubernetes.io/projected/baa471f1-03e9-4da3-b2f7-836bc7413843-kube-api-access-2pl25\") pod \"dnsmasq-dns-f4c6c447c-88llx\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.901109 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866449bdb9-nrh6x"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.925177 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59c6c64b5c-glnmb"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.926786 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.943093 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c6c64b5c-glnmb"]
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.955761 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f4c6c447c-88llx"
Nov 24 20:39:01 crc kubenswrapper[4812]: I1124 20:39:01.999574 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866449bdb9-nrh6x"]
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.070657 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866449bdb9-nrh6x" event={"ID":"9dde0d6a-f060-4501-9483-deda1a0e4648","Type":"ContainerStarted","Data":"80c00ae777d4ee44cb2b0c1125e2d87b301b42a428efe9a1e8db857ccf05bfa2"}
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.073117 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-config\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.073244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdznb\" (UniqueName: \"kubernetes.io/projected/f180ecb5-e534-49cd-a9a4-e12a29ecca51-kube-api-access-gdznb\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.073366 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-dns-svc\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.105153 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c86457d7-blw58"]
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.175084 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-dns-svc\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.175665 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-config\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.175726 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdznb\" (UniqueName: \"kubernetes.io/projected/f180ecb5-e534-49cd-a9a4-e12a29ecca51-kube-api-access-gdznb\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.176039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-dns-svc\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
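[Annotation] The repeating VerifyControllerAttachedVolume → MountVolume started → MountVolume.SetUp succeeded progression above (and the UnmountVolume started / Volume detached lines later in this log) is the kubelet volume manager reconciling desired state against actual state. A toy model of that pattern, illustrative only and not kubelet's real code:

```go
package main

import "fmt"

// reconcile compares the desired volume set for scheduled pods against the
// actually-mounted set: missing volumes are mounted, unwanted ones unmounted.
func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Printf("MountVolume started for volume %q\n", v)
			actual[v] = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Printf("UnmountVolume started for volume %q\n", v)
			delete(actual, v)
			fmt.Printf("Volume detached for volume %q\n", v)
		}
	}
}

func main() {
	// Volume names taken from the dnsmasq pods above.
	actual := map[string]bool{}
	reconcile(map[string]bool{"config": true, "dns-svc": true, "kube-api-access-gdznb": true}, actual)
	reconcile(map[string]bool{}, actual) // pod deleted: everything unmounts
}
```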
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.176571 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-config\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.206365 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdznb\" (UniqueName: \"kubernetes.io/projected/f180ecb5-e534-49cd-a9a4-e12a29ecca51-kube-api-access-gdznb\") pod \"dnsmasq-dns-59c6c64b5c-glnmb\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.258519 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.456657 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f4c6c447c-88llx"]
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.463861 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c6c64b5c-glnmb"]
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.743403 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.745637 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.750652 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.750931 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.751125 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.751391 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.754521 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-s2qdl"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.759493 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.763118 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.776057 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.890422 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.890585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fec318d7-3a68-4aa4-8145-511d13aae323-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.890724 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.890759 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.890817 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.890846 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fec318d7-3a68-4aa4-8145-511d13aae323-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.891112 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v65z\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-kube-api-access-6v65z\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.891205 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.891300 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.891539 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.891619 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.993682 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fec318d7-3a68-4aa4-8145-511d13aae323-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.993758 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v65z\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-kube-api-access-6v65z\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.993796 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.993847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.993903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.993936 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.993970 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.994041 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fec318d7-3a68-4aa4-8145-511d13aae323-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.994090 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.994112 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.994147 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.995507 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.995604 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.995739 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.995987 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.998636 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.998682 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fec318d7-3a68-4aa4-8145-511d13aae323-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.998688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fec318d7-3a68-4aa4-8145-511d13aae323-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.998858 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.999222 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 20:39:02 crc kubenswrapper[4812]: I1124 20:39:02.999258 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b7339b2a679bf59bd9d095b8c9a5f08f52f05b9abec851abbd4b77e9fd0415e3/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.000352 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.029433 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v65z\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-kube-api-access-6v65z\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.035814 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
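[Annotation] The csi_attacher.go line above shows why the PVC-backed RabbitMQ volume goes straight to SetUp: kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet skips NodeStageVolume (MountDevice) and calls NodePublishVolume directly. A minimal sketch of how a CSI driver advertises (or omits) that capability, using the CSI spec's Go bindings; the nodeServer type here is hypothetical:

```go
package main

import (
	"context"
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a hypothetical CSI node service. Returning an empty capability
// list (as the hostpath provisioner effectively does) makes the kubelet log
// "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice..." above.
type nodeServer struct{}

func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{
			{
				// Advertise this only if NodeStageVolume/NodeUnstageVolume are
				// really implemented; otherwise the kubelet publishes directly.
				Type: &csi.NodeServiceCapability_Rpc{
					Rpc: &csi.NodeServiceCapability_RPC{
						Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
					},
				},
			},
		},
	}, nil
}

func main() {
	resp, _ := (&nodeServer{}).NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
	fmt.Printf("advertising %d node capability(ies)\n", len(resp.Capabilities))
}
```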
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.037515 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.041708 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.041947 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.042147 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.042281 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7cm2b"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.042555 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.042688 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.042896 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.087870 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.094646 4812 generic.go:334] "Generic (PLEG): container finished" podID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerID="0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e" exitCode=0
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.094767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" event={"ID":"baa471f1-03e9-4da3-b2f7-836bc7413843","Type":"ContainerDied","Data":"0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e"}
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.094813 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" event={"ID":"baa471f1-03e9-4da3-b2f7-836bc7413843","Type":"ContainerStarted","Data":"a87391a7a90977dfdf284a5390d2afab6b130c94b04a2e95c04ea9607827c7d4"}
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.099399 4812 generic.go:334] "Generic (PLEG): container finished" podID="e9d4310a-766c-41f6-9157-2fb3e5699c7f" containerID="78d0037b040de5703d5e7657558f9dcea395f011dd6293dac51a1c26eddba1ef" exitCode=0
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.099478 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c86457d7-blw58" event={"ID":"e9d4310a-766c-41f6-9157-2fb3e5699c7f","Type":"ContainerDied","Data":"78d0037b040de5703d5e7657558f9dcea395f011dd6293dac51a1c26eddba1ef"}
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.099503 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c86457d7-blw58" event={"ID":"e9d4310a-766c-41f6-9157-2fb3e5699c7f","Type":"ContainerStarted","Data":"a766fd9a8357fb916be83fdb6b42757598250af22a7886d5a86f7e53a78cbf3e"}
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.101318 4812 generic.go:334] "Generic (PLEG): container finished" podID="9dde0d6a-f060-4501-9483-deda1a0e4648" containerID="491f0f29de65c6066538047941f7f70e221e03aa75676accbadf503b0b6badb0" exitCode=0
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.101388 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866449bdb9-nrh6x" event={"ID":"9dde0d6a-f060-4501-9483-deda1a0e4648","Type":"ContainerDied","Data":"491f0f29de65c6066538047941f7f70e221e03aa75676accbadf503b0b6badb0"}
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.104955 4812 generic.go:334] "Generic (PLEG): container finished" podID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerID="db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537" exitCode=0
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.105017 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" event={"ID":"f180ecb5-e534-49cd-a9a4-e12a29ecca51","Type":"ContainerDied","Data":"db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537"}
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.105052 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" event={"ID":"f180ecb5-e534-49cd-a9a4-e12a29ecca51","Type":"ContainerStarted","Data":"1456aa594e71fbb1bf8ca3e2dfbc1fe9f0b7ddfaa1406fee72ec8a109e593eae"}
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.125364 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.162699 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.197966 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.198033 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2962d2b-d745-4923-bb91-47cf0f07eb7b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.198095 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.198122 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.198140 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.199143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.199201 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.199220 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.200664 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.200710 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2962d2b-d745-4923-bb91-47cf0f07eb7b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.200765 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfcw5\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-kube-api-access-lfcw5\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305074 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305319 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305386 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305406 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305435 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305462 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305517 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2962d2b-d745-4923-bb91-47cf0f07eb7b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305557 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfcw5\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-kube-api-access-lfcw5\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305580 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.305600 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2962d2b-d745-4923-bb91-47cf0f07eb7b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.306988 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-config-data\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.307397 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.307745 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.308169 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.309158 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.311074 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.311105 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ae9faa9c8565b4a9c378d06dabfd62c4b1d925f5972ce11cf26ac121ab533ae/globalmount\"" pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.315802 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.315991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2962d2b-d745-4923-bb91-47cf0f07eb7b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.320245 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2962d2b-d745-4923-bb91-47cf0f07eb7b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.320388 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfcw5\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-kube-api-access-lfcw5\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.363192 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " pod="openstack/rabbitmq-server-0" Nov 24 20:39:03 crc kubenswrapper[4812]: E1124 20:39:03.365069 4812 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 24 20:39:03 crc kubenswrapper[4812]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/baa471f1-03e9-4da3-b2f7-836bc7413843/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 20:39:03 crc kubenswrapper[4812]: > podSandboxID="a87391a7a90977dfdf284a5390d2afab6b130c94b04a2e95c04ea9607827c7d4" Nov 24 20:39:03 crc kubenswrapper[4812]: E1124 20:39:03.365186 4812 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 24 20:39:03 crc kubenswrapper[4812]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb6hc5h68h68h594h659hdbh679h65ch5f6hdch6h5b9h8fh55hfhf8h57fhc7h56ch687h669h559h678h5dhc7hf7h697h5d6h9ch669h54fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-f4c6c447c-88llx_openstack(baa471f1-03e9-4da3-b2f7-836bc7413843): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/baa471f1-03e9-4da3-b2f7-836bc7413843/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 24 20:39:03 crc kubenswrapper[4812]: > logger="UnhandledError" Nov 24 20:39:03 crc kubenswrapper[4812]: E1124 20:39:03.366835 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/baa471f1-03e9-4da3-b2f7-836bc7413843/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.393669 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.510313 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866449bdb9-nrh6x" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.600932 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55c86457d7-blw58" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.610919 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4vvf\" (UniqueName: \"kubernetes.io/projected/9dde0d6a-f060-4501-9483-deda1a0e4648-kube-api-access-s4vvf\") pod \"9dde0d6a-f060-4501-9483-deda1a0e4648\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.610998 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0d6a-f060-4501-9483-deda1a0e4648-config\") pod \"9dde0d6a-f060-4501-9483-deda1a0e4648\" (UID: \"9dde0d6a-f060-4501-9483-deda1a0e4648\") " Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.618362 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dde0d6a-f060-4501-9483-deda1a0e4648-kube-api-access-s4vvf" (OuterVolumeSpecName: "kube-api-access-s4vvf") pod "9dde0d6a-f060-4501-9483-deda1a0e4648" (UID: "9dde0d6a-f060-4501-9483-deda1a0e4648"). InnerVolumeSpecName "kube-api-access-s4vvf". 
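[Annotation] The CreateContainerError above is a subPath failure: the kubelet prepared the bind-mount source under volume-subpaths/dns-svc/dnsmasq-dns/1, but CRI-O could not mount it onto etc/dnsmasq.d/hosts/dns-svc inside the new container's root filesystem. The two subPath mounts involved are visible in the &Container{...} dump; reconstructed as client-go types below (a sketch of what the logged spec corresponds to, not the generating operator's source):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Reconstruction of the dnsmasq-dns container from the spec dumped in the
	// error above; image and mount paths are copied verbatim from the log.
	c := corev1.Container{
		Name:    "dnsmasq-dns",
		Image:   "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba",
		Command: []string{"/bin/bash"},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "config", ReadOnly: true, MountPath: "/etc/dnsmasq.d/config.cfg", SubPath: "dns"},
			// This is the subPath whose bind mount failed with
			// "No such file or directory" in the CreateContainerError above.
			{Name: "dns-svc", ReadOnly: true, MountPath: "/etc/dnsmasq.d/hosts/dns-svc", SubPath: "dns-svc"},
		},
	}
	fmt.Printf("%s mounts %d volumes; %q comes from subPath %q\n",
		c.Name, len(c.VolumeMounts), c.VolumeMounts[1].MountPath, c.VolumeMounts[1].SubPath)
}
```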
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.640622 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dde0d6a-f060-4501-9483-deda1a0e4648-config" (OuterVolumeSpecName: "config") pod "9dde0d6a-f060-4501-9483-deda1a0e4648" (UID: "9dde0d6a-f060-4501-9483-deda1a0e4648"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.704910 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 20:39:03 crc kubenswrapper[4812]: W1124 20:39:03.707659 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfec318d7_3a68_4aa4_8145_511d13aae323.slice/crio-b298287db0d79b8ea92df56858db642759ce6dddc43c669a1ab4dee3d4cc0c05 WatchSource:0}: Error finding container b298287db0d79b8ea92df56858db642759ce6dddc43c669a1ab4dee3d4cc0c05: Status 404 returned error can't find the container with id b298287db0d79b8ea92df56858db642759ce6dddc43c669a1ab4dee3d4cc0c05 Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.712283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-config\") pod \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.712374 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-dns-svc\") pod \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.712493 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5pl9\" (UniqueName: \"kubernetes.io/projected/e9d4310a-766c-41f6-9157-2fb3e5699c7f-kube-api-access-p5pl9\") pod \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\" (UID: \"e9d4310a-766c-41f6-9157-2fb3e5699c7f\") " Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.712787 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0d6a-f060-4501-9483-deda1a0e4648-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.712802 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4vvf\" (UniqueName: \"kubernetes.io/projected/9dde0d6a-f060-4501-9483-deda1a0e4648-kube-api-access-s4vvf\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.716519 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d4310a-766c-41f6-9157-2fb3e5699c7f-kube-api-access-p5pl9" (OuterVolumeSpecName: "kube-api-access-p5pl9") pod "e9d4310a-766c-41f6-9157-2fb3e5699c7f" (UID: "e9d4310a-766c-41f6-9157-2fb3e5699c7f"). InnerVolumeSpecName "kube-api-access-p5pl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.730140 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9d4310a-766c-41f6-9157-2fb3e5699c7f" (UID: "e9d4310a-766c-41f6-9157-2fb3e5699c7f"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.738938 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-config" (OuterVolumeSpecName: "config") pod "e9d4310a-766c-41f6-9157-2fb3e5699c7f" (UID: "e9d4310a-766c-41f6-9157-2fb3e5699c7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.813880 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.814135 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9d4310a-766c-41f6-9157-2fb3e5699c7f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.814147 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5pl9\" (UniqueName: \"kubernetes.io/projected/e9d4310a-766c-41f6-9157-2fb3e5699c7f-kube-api-access-p5pl9\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:03 crc kubenswrapper[4812]: I1124 20:39:03.862113 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 20:39:03 crc kubenswrapper[4812]: W1124 20:39:03.869031 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2962d2b_d745_4923_bb91_47cf0f07eb7b.slice/crio-adc50b143f7556749ffea279fe2f0a0711c982e81a99f521b834f6ed870c0e30 WatchSource:0}: Error finding container adc50b143f7556749ffea279fe2f0a0711c982e81a99f521b834f6ed870c0e30: Status 404 returned error can't find the container with id adc50b143f7556749ffea279fe2f0a0711c982e81a99f521b834f6ed870c0e30 Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.119158 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fec318d7-3a68-4aa4-8145-511d13aae323","Type":"ContainerStarted","Data":"b298287db0d79b8ea92df56858db642759ce6dddc43c669a1ab4dee3d4cc0c05"} Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.122288 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2962d2b-d745-4923-bb91-47cf0f07eb7b","Type":"ContainerStarted","Data":"adc50b143f7556749ffea279fe2f0a0711c982e81a99f521b834f6ed870c0e30"} Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.133133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55c86457d7-blw58" event={"ID":"e9d4310a-766c-41f6-9157-2fb3e5699c7f","Type":"ContainerDied","Data":"a766fd9a8357fb916be83fdb6b42757598250af22a7886d5a86f7e53a78cbf3e"} Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.133267 4812 scope.go:117] "RemoveContainer" containerID="78d0037b040de5703d5e7657558f9dcea395f011dd6293dac51a1c26eddba1ef" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.133720 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55c86457d7-blw58" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.136265 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-866449bdb9-nrh6x" event={"ID":"9dde0d6a-f060-4501-9483-deda1a0e4648","Type":"ContainerDied","Data":"80c00ae777d4ee44cb2b0c1125e2d87b301b42a428efe9a1e8db857ccf05bfa2"} Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.136444 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-866449bdb9-nrh6x" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.141696 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" event={"ID":"f180ecb5-e534-49cd-a9a4-e12a29ecca51","Type":"ContainerStarted","Data":"489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1"} Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.168837 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 20:39:04 crc kubenswrapper[4812]: E1124 20:39:04.169281 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d4310a-766c-41f6-9157-2fb3e5699c7f" containerName="init" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.169302 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d4310a-766c-41f6-9157-2fb3e5699c7f" containerName="init" Nov 24 20:39:04 crc kubenswrapper[4812]: E1124 20:39:04.169358 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dde0d6a-f060-4501-9483-deda1a0e4648" containerName="init" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.169367 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dde0d6a-f060-4501-9483-deda1a0e4648" containerName="init" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.169534 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d4310a-766c-41f6-9157-2fb3e5699c7f" containerName="init" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.169552 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dde0d6a-f060-4501-9483-deda1a0e4648" containerName="init" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.170463 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.175010 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.175112 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7p6zl" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.176154 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.176913 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.182072 4812 scope.go:117] "RemoveContainer" containerID="491f0f29de65c6066538047941f7f70e221e03aa75676accbadf503b0b6badb0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.190779 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.197577 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.243713 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" podStartSLOduration=3.243692666 podStartE2EDuration="3.243692666s" podCreationTimestamp="2025-11-24 20:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:39:04.22933212 +0000 UTC m=+4938.018284531" watchObservedRunningTime="2025-11-24 20:39:04.243692666 +0000 UTC m=+4938.032645047" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.317901 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-866449bdb9-nrh6x"] Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.331400 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-866449bdb9-nrh6x"] Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.346470 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb18991-c497-4318-abcf-de576607d11c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.346589 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cwk\" (UniqueName: \"kubernetes.io/projected/dfb18991-c497-4318-abcf-de576607d11c-kube-api-access-z8cwk\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.346728 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dfb18991-c497-4318-abcf-de576607d11c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.346807 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.346863 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.346893 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.346914 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.347047 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb18991-c497-4318-abcf-de576607d11c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.368686 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55c86457d7-blw58"] Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.372557 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55c86457d7-blw58"] Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448224 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448248 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448262 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-operator-scripts\") pod \"openstack-galera-0\" (UID: 
\"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448309 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb18991-c497-4318-abcf-de576607d11c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb18991-c497-4318-abcf-de576607d11c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448364 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8cwk\" (UniqueName: \"kubernetes.io/projected/dfb18991-c497-4318-abcf-de576607d11c-kube-api-access-z8cwk\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448408 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dfb18991-c497-4318-abcf-de576607d11c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.448745 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dfb18991-c497-4318-abcf-de576607d11c-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.449464 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-kolla-config\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.450034 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-config-data-default\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.451012 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfb18991-c497-4318-abcf-de576607d11c-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.453419 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.453442 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ed0c2048b0befc0ee44807ca042adfd98011510d6a2eedd8ee6911ccd5ab4d0c/globalmount\"" pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.455097 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfb18991-c497-4318-abcf-de576607d11c-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.455643 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dfb18991-c497-4318-abcf-de576607d11c-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.483930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8cwk\" (UniqueName: \"kubernetes.io/projected/dfb18991-c497-4318-abcf-de576607d11c-kube-api-access-z8cwk\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.486718 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-14d6bf99-c6d1-4155-9132-4a83dfbc54f5\") pod \"openstack-galera-0\" (UID: \"dfb18991-c497-4318-abcf-de576607d11c\") " pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.574115 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.976591 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dde0d6a-f060-4501-9483-deda1a0e4648" path="/var/lib/kubelet/pods/9dde0d6a-f060-4501-9483-deda1a0e4648/volumes" Nov 24 20:39:04 crc kubenswrapper[4812]: I1124 20:39:04.977740 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d4310a-766c-41f6-9157-2fb3e5699c7f" path="/var/lib/kubelet/pods/e9d4310a-766c-41f6-9157-2fb3e5699c7f/volumes" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.011161 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 20:39:05 crc kubenswrapper[4812]: W1124 20:39:05.018289 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfb18991_c497_4318_abcf_de576607d11c.slice/crio-934a9ee21622ce1b3d417ae1f9575af3ab98defae77fe80b0d25fa62678f8a8d WatchSource:0}: Error finding container 934a9ee21622ce1b3d417ae1f9575af3ab98defae77fe80b0d25fa62678f8a8d: Status 404 returned error can't find the container with id 934a9ee21622ce1b3d417ae1f9575af3ab98defae77fe80b0d25fa62678f8a8d Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.152050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fec318d7-3a68-4aa4-8145-511d13aae323","Type":"ContainerStarted","Data":"735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0"} Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.154311 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" event={"ID":"baa471f1-03e9-4da3-b2f7-836bc7413843","Type":"ContainerStarted","Data":"c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f"} Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.155029 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.156681 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2962d2b-d745-4923-bb91-47cf0f07eb7b","Type":"ContainerStarted","Data":"6bbff99df52c8a970fa4e4db65e02599974521241babc0b1a998f62fad0fa637"} Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.158023 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dfb18991-c497-4318-abcf-de576607d11c","Type":"ContainerStarted","Data":"934a9ee21622ce1b3d417ae1f9575af3ab98defae77fe80b0d25fa62678f8a8d"} Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.159380 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.208819 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" podStartSLOduration=4.208802177 podStartE2EDuration="4.208802177s" podCreationTimestamp="2025-11-24 20:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:39:05.202611092 +0000 UTC m=+4938.991563473" watchObservedRunningTime="2025-11-24 20:39:05.208802177 +0000 UTC m=+4938.997754548" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.586156 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/openstack-cell1-galera-0"] Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.588544 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.593016 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.600544 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.600711 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.600965 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.601661 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rrxvr" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.668060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.668163 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.668193 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.668245 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.668384 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccx2t\" (UniqueName: \"kubernetes.io/projected/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-kube-api-access-ccx2t\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.668473 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc 
kubenswrapper[4812]: I1124 20:39:05.668528 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5f38454-20f1-4634-ab81-e97327a79c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5f38454-20f1-4634-ab81-e97327a79c45\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.668582 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.770848 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccx2t\" (UniqueName: \"kubernetes.io/projected/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-kube-api-access-ccx2t\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.770974 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.771035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b5f38454-20f1-4634-ab81-e97327a79c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5f38454-20f1-4634-ab81-e97327a79c45\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.771087 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.771159 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.771201 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.771236 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " 
pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.771357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.772226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.772681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.773376 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.774409 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.776098 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 20:39:05 crc kubenswrapper[4812]: I1124 20:39:05.776197 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b5f38454-20f1-4634-ab81-e97327a79c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5f38454-20f1-4634-ab81-e97327a79c45\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ae5af82bef2b0d09512da1ea86c3d96e4e2280068bc4d897fe9b1a37b088d010/globalmount\"" pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.016595 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccx2t\" (UniqueName: \"kubernetes.io/projected/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-kube-api-access-ccx2t\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.016897 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.018759 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f5f9fd3-3b4a-4670-9daf-06e0527067ab-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.062596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b5f38454-20f1-4634-ab81-e97327a79c45\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5f38454-20f1-4634-ab81-e97327a79c45\") pod \"openstack-cell1-galera-0\" (UID: \"8f5f9fd3-3b4a-4670-9daf-06e0527067ab\") " pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.112219 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.113387 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.115915 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-xjkpm" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.116688 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.116822 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.131093 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.167213 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dfb18991-c497-4318-abcf-de576607d11c","Type":"ContainerStarted","Data":"a9c16250f79b68451a7eab28cea9ee6824de24d26f8d87d72be18f714c71d474"} Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.176386 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4j6t\" (UniqueName: \"kubernetes.io/projected/6007348b-cefa-4899-bdf4-bc09671bdf1e-kube-api-access-l4j6t\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.176446 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6007348b-cefa-4899-bdf4-bc09671bdf1e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.176482 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6007348b-cefa-4899-bdf4-bc09671bdf1e-config-data\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.176630 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6007348b-cefa-4899-bdf4-bc09671bdf1e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.176760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6007348b-cefa-4899-bdf4-bc09671bdf1e-kolla-config\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.220324 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.278367 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4j6t\" (UniqueName: \"kubernetes.io/projected/6007348b-cefa-4899-bdf4-bc09671bdf1e-kube-api-access-l4j6t\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.278493 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6007348b-cefa-4899-bdf4-bc09671bdf1e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.278540 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6007348b-cefa-4899-bdf4-bc09671bdf1e-config-data\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.278597 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6007348b-cefa-4899-bdf4-bc09671bdf1e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.278691 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6007348b-cefa-4899-bdf4-bc09671bdf1e-kolla-config\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.280781 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6007348b-cefa-4899-bdf4-bc09671bdf1e-config-data\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.282012 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6007348b-cefa-4899-bdf4-bc09671bdf1e-kolla-config\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.286444 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6007348b-cefa-4899-bdf4-bc09671bdf1e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.286529 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6007348b-cefa-4899-bdf4-bc09671bdf1e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.296524 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4j6t\" (UniqueName: \"kubernetes.io/projected/6007348b-cefa-4899-bdf4-bc09671bdf1e-kube-api-access-l4j6t\") pod \"memcached-0\" (UID: 
\"6007348b-cefa-4899-bdf4-bc09671bdf1e\") " pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.431368 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.672435 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 20:39:06 crc kubenswrapper[4812]: I1124 20:39:06.908052 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 20:39:06 crc kubenswrapper[4812]: W1124 20:39:06.913597 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6007348b_cefa_4899_bdf4_bc09671bdf1e.slice/crio-95e714dfd04c9a00edb6bffd2d3d35fa75ff32da15135003fe5afd0018909c50 WatchSource:0}: Error finding container 95e714dfd04c9a00edb6bffd2d3d35fa75ff32da15135003fe5afd0018909c50: Status 404 returned error can't find the container with id 95e714dfd04c9a00edb6bffd2d3d35fa75ff32da15135003fe5afd0018909c50 Nov 24 20:39:07 crc kubenswrapper[4812]: I1124 20:39:07.176141 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f5f9fd3-3b4a-4670-9daf-06e0527067ab","Type":"ContainerStarted","Data":"af7614b5b9d752ceb08c5081a0b1d802662acef30905c46154f34523c7fde087"} Nov 24 20:39:07 crc kubenswrapper[4812]: I1124 20:39:07.176456 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f5f9fd3-3b4a-4670-9daf-06e0527067ab","Type":"ContainerStarted","Data":"33b6c4b974bfa65dce90dfd786ca43fcbdb314cc2cd3bfa0ea1a3364ab53920c"} Nov 24 20:39:07 crc kubenswrapper[4812]: I1124 20:39:07.178301 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6007348b-cefa-4899-bdf4-bc09671bdf1e","Type":"ContainerStarted","Data":"e867055ea25380c2989f6aacaf10d61181a8f553cdf7674e0263f1715aa4d665"} Nov 24 20:39:07 crc kubenswrapper[4812]: I1124 20:39:07.178390 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6007348b-cefa-4899-bdf4-bc09671bdf1e","Type":"ContainerStarted","Data":"95e714dfd04c9a00edb6bffd2d3d35fa75ff32da15135003fe5afd0018909c50"} Nov 24 20:39:07 crc kubenswrapper[4812]: I1124 20:39:07.178667 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 20:39:07 crc kubenswrapper[4812]: I1124 20:39:07.224362 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.224313861 podStartE2EDuration="1.224313861s" podCreationTimestamp="2025-11-24 20:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:39:07.2228868 +0000 UTC m=+4941.011839191" watchObservedRunningTime="2025-11-24 20:39:07.224313861 +0000 UTC m=+4941.013266262" Nov 24 20:39:09 crc kubenswrapper[4812]: I1124 20:39:09.199606 4812 generic.go:334] "Generic (PLEG): container finished" podID="dfb18991-c497-4318-abcf-de576607d11c" containerID="a9c16250f79b68451a7eab28cea9ee6824de24d26f8d87d72be18f714c71d474" exitCode=0 Nov 24 20:39:09 crc kubenswrapper[4812]: I1124 20:39:09.199699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"dfb18991-c497-4318-abcf-de576607d11c","Type":"ContainerDied","Data":"a9c16250f79b68451a7eab28cea9ee6824de24d26f8d87d72be18f714c71d474"} Nov 24 20:39:10 crc kubenswrapper[4812]: I1124 20:39:10.215457 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dfb18991-c497-4318-abcf-de576607d11c","Type":"ContainerStarted","Data":"ad09509dd21688e7723e4965cef1afb34a6f846d44f10ec22d28faed165b8e96"} Nov 24 20:39:10 crc kubenswrapper[4812]: I1124 20:39:10.249165 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=7.249136846 podStartE2EDuration="7.249136846s" podCreationTimestamp="2025-11-24 20:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:39:10.247204571 +0000 UTC m=+4944.036156972" watchObservedRunningTime="2025-11-24 20:39:10.249136846 +0000 UTC m=+4944.038089257" Nov 24 20:39:11 crc kubenswrapper[4812]: I1124 20:39:11.229542 4812 generic.go:334] "Generic (PLEG): container finished" podID="8f5f9fd3-3b4a-4670-9daf-06e0527067ab" containerID="af7614b5b9d752ceb08c5081a0b1d802662acef30905c46154f34523c7fde087" exitCode=0 Nov 24 20:39:11 crc kubenswrapper[4812]: I1124 20:39:11.230001 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f5f9fd3-3b4a-4670-9daf-06e0527067ab","Type":"ContainerDied","Data":"af7614b5b9d752ceb08c5081a0b1d802662acef30905c46154f34523c7fde087"} Nov 24 20:39:11 crc kubenswrapper[4812]: I1124 20:39:11.957662 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.244906 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8f5f9fd3-3b4a-4670-9daf-06e0527067ab","Type":"ContainerStarted","Data":"56b3fdcc2814e723f5f97bce44f9102e4a43d3c4aa5a49322bcf9348f0154f4b"} Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.260829 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.274390 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.274364506 podStartE2EDuration="8.274364506s" podCreationTimestamp="2025-11-24 20:39:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:39:12.268748697 +0000 UTC m=+4946.057701118" watchObservedRunningTime="2025-11-24 20:39:12.274364506 +0000 UTC m=+4946.063316917" Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.366505 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f4c6c447c-88llx"] Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.367179 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerName="dnsmasq-dns" containerID="cri-o://c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f" gracePeriod=10 Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.799755 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.900044 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pl25\" (UniqueName: \"kubernetes.io/projected/baa471f1-03e9-4da3-b2f7-836bc7413843-kube-api-access-2pl25\") pod \"baa471f1-03e9-4da3-b2f7-836bc7413843\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.900170 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-dns-svc\") pod \"baa471f1-03e9-4da3-b2f7-836bc7413843\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.900226 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-config\") pod \"baa471f1-03e9-4da3-b2f7-836bc7413843\" (UID: \"baa471f1-03e9-4da3-b2f7-836bc7413843\") " Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.904981 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa471f1-03e9-4da3-b2f7-836bc7413843-kube-api-access-2pl25" (OuterVolumeSpecName: "kube-api-access-2pl25") pod "baa471f1-03e9-4da3-b2f7-836bc7413843" (UID: "baa471f1-03e9-4da3-b2f7-836bc7413843"). InnerVolumeSpecName "kube-api-access-2pl25". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.937648 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-config" (OuterVolumeSpecName: "config") pod "baa471f1-03e9-4da3-b2f7-836bc7413843" (UID: "baa471f1-03e9-4da3-b2f7-836bc7413843"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:39:12 crc kubenswrapper[4812]: I1124 20:39:12.957801 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "baa471f1-03e9-4da3-b2f7-836bc7413843" (UID: "baa471f1-03e9-4da3-b2f7-836bc7413843"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.001695 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.001729 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pl25\" (UniqueName: \"kubernetes.io/projected/baa471f1-03e9-4da3-b2f7-836bc7413843-kube-api-access-2pl25\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.001744 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/baa471f1-03e9-4da3-b2f7-836bc7413843-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.264585 4812 generic.go:334] "Generic (PLEG): container finished" podID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerID="c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f" exitCode=0 Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.264661 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" event={"ID":"baa471f1-03e9-4da3-b2f7-836bc7413843","Type":"ContainerDied","Data":"c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f"} Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.264716 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" event={"ID":"baa471f1-03e9-4da3-b2f7-836bc7413843","Type":"ContainerDied","Data":"a87391a7a90977dfdf284a5390d2afab6b130c94b04a2e95c04ea9607827c7d4"} Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.264748 4812 scope.go:117] "RemoveContainer" containerID="c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.264973 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f4c6c447c-88llx" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.292522 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f4c6c447c-88llx"] Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.296776 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f4c6c447c-88llx"] Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.305636 4812 scope.go:117] "RemoveContainer" containerID="0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.323227 4812 scope.go:117] "RemoveContainer" containerID="c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f" Nov 24 20:39:13 crc kubenswrapper[4812]: E1124 20:39:13.323694 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f\": container with ID starting with c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f not found: ID does not exist" containerID="c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.323723 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f"} err="failed to get container status \"c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f\": rpc error: code = NotFound desc = could not find container \"c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f\": container with ID starting with c1d3298472e137ecf5bb7a0b3fa9afe03e1280f20530f0fd95bf79f42af54e4f not found: ID does not exist" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.323744 4812 scope.go:117] "RemoveContainer" containerID="0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e" Nov 24 20:39:13 crc kubenswrapper[4812]: E1124 20:39:13.324111 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e\": container with ID starting with 0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e not found: ID does not exist" containerID="0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e" Nov 24 20:39:13 crc kubenswrapper[4812]: I1124 20:39:13.324150 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e"} err="failed to get container status \"0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e\": rpc error: code = NotFound desc = could not find container \"0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e\": container with ID starting with 0f07f3ff68580acb4c31fa102194a73ea0636fb5e96a7f002fd0dc2c123d140e not found: ID does not exist" Nov 24 20:39:14 crc kubenswrapper[4812]: I1124 20:39:14.575286 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 20:39:14 crc kubenswrapper[4812]: I1124 20:39:14.575688 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 20:39:14 crc kubenswrapper[4812]: I1124 20:39:14.983887 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" path="/var/lib/kubelet/pods/baa471f1-03e9-4da3-b2f7-836bc7413843/volumes" Nov 24 20:39:16 crc kubenswrapper[4812]: I1124 20:39:16.220760 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:16 crc kubenswrapper[4812]: I1124 20:39:16.222460 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:16 crc kubenswrapper[4812]: I1124 20:39:16.325602 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:16 crc kubenswrapper[4812]: I1124 20:39:16.433933 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 20:39:16 crc kubenswrapper[4812]: I1124 20:39:16.443856 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 20:39:16 crc kubenswrapper[4812]: I1124 20:39:16.787553 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 20:39:16 crc kubenswrapper[4812]: I1124 20:39:16.889476 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 20:39:38 crc kubenswrapper[4812]: I1124 20:39:38.498056 4812 generic.go:334] "Generic (PLEG): container finished" podID="fec318d7-3a68-4aa4-8145-511d13aae323" containerID="735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0" exitCode=0 Nov 24 20:39:38 crc kubenswrapper[4812]: I1124 20:39:38.498215 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fec318d7-3a68-4aa4-8145-511d13aae323","Type":"ContainerDied","Data":"735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0"} Nov 24 20:39:38 crc kubenswrapper[4812]: I1124 20:39:38.500710 4812 generic.go:334] "Generic (PLEG): container finished" podID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerID="6bbff99df52c8a970fa4e4db65e02599974521241babc0b1a998f62fad0fa637" exitCode=0 Nov 24 20:39:38 crc kubenswrapper[4812]: I1124 20:39:38.500758 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2962d2b-d745-4923-bb91-47cf0f07eb7b","Type":"ContainerDied","Data":"6bbff99df52c8a970fa4e4db65e02599974521241babc0b1a998f62fad0fa637"} Nov 24 20:39:39 crc kubenswrapper[4812]: I1124 20:39:39.512737 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2962d2b-d745-4923-bb91-47cf0f07eb7b","Type":"ContainerStarted","Data":"a3e3e8f62126ba3683a5a7b41140d73f2247b5c22b8ff12219a49651a7f11d53"} Nov 24 20:39:39 crc kubenswrapper[4812]: I1124 20:39:39.514595 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 20:39:39 crc kubenswrapper[4812]: I1124 20:39:39.514873 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fec318d7-3a68-4aa4-8145-511d13aae323","Type":"ContainerStarted","Data":"9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8"} Nov 24 20:39:39 crc kubenswrapper[4812]: I1124 20:39:39.515187 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:39:39 crc kubenswrapper[4812]: I1124 20:39:39.551021 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=37.550988125 podStartE2EDuration="37.550988125s" podCreationTimestamp="2025-11-24 20:39:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:39:39.538937194 +0000 UTC m=+4973.327889655" watchObservedRunningTime="2025-11-24 20:39:39.550988125 +0000 UTC m=+4973.339940536" Nov 24 20:39:39 crc kubenswrapper[4812]: I1124 20:39:39.576323 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.576298591 podStartE2EDuration="38.576298591s" podCreationTimestamp="2025-11-24 20:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:39:39.571161816 +0000 UTC m=+4973.360114247" watchObservedRunningTime="2025-11-24 20:39:39.576298591 +0000 UTC m=+4973.365250992" Nov 24 20:39:53 crc kubenswrapper[4812]: I1124 20:39:53.128602 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:39:53 crc kubenswrapper[4812]: I1124 20:39:53.397585 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 20:40:02 crc kubenswrapper[4812]: I1124 20:40:02.854964 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54564445dc-jx4gp"] Nov 24 20:40:02 crc kubenswrapper[4812]: E1124 20:40:02.856082 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerName="dnsmasq-dns" Nov 24 20:40:02 crc kubenswrapper[4812]: I1124 20:40:02.856109 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerName="dnsmasq-dns" Nov 24 20:40:02 crc kubenswrapper[4812]: E1124 20:40:02.856146 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerName="init" Nov 24 20:40:02 crc kubenswrapper[4812]: I1124 20:40:02.856160 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerName="init" Nov 24 20:40:02 crc kubenswrapper[4812]: I1124 20:40:02.857722 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa471f1-03e9-4da3-b2f7-836bc7413843" containerName="dnsmasq-dns" Nov 24 20:40:02 crc kubenswrapper[4812]: I1124 20:40:02.859112 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:02 crc kubenswrapper[4812]: I1124 20:40:02.872962 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54564445dc-jx4gp"] Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.035945 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-dns-svc\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.036227 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6mwx\" (UniqueName: \"kubernetes.io/projected/beccfcee-f6c2-418c-a598-40f78a06b03c-kube-api-access-k6mwx\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.036322 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-config\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.137880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-config\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.138040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-dns-svc\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.138086 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6mwx\" (UniqueName: \"kubernetes.io/projected/beccfcee-f6c2-418c-a598-40f78a06b03c-kube-api-access-k6mwx\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.138834 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-config\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.140811 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-dns-svc\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.174685 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6mwx\" (UniqueName: 
\"kubernetes.io/projected/beccfcee-f6c2-418c-a598-40f78a06b03c-kube-api-access-k6mwx\") pod \"dnsmasq-dns-54564445dc-jx4gp\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.212607 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.487118 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54564445dc-jx4gp"] Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.571199 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.776247 4812 generic.go:334] "Generic (PLEG): container finished" podID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerID="9e740797b895af05e04286334ca8268a6ec08e25da9d4f5320fef38bf57551dc" exitCode=0 Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.776426 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" event={"ID":"beccfcee-f6c2-418c-a598-40f78a06b03c","Type":"ContainerDied","Data":"9e740797b895af05e04286334ca8268a6ec08e25da9d4f5320fef38bf57551dc"} Nov 24 20:40:03 crc kubenswrapper[4812]: I1124 20:40:03.776564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" event={"ID":"beccfcee-f6c2-418c-a598-40f78a06b03c","Type":"ContainerStarted","Data":"4059444df73457378bad8dc50097dec731278a3bdcc243092e7aef13f718dc4c"} Nov 24 20:40:04 crc kubenswrapper[4812]: I1124 20:40:04.195462 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 20:40:04 crc kubenswrapper[4812]: I1124 20:40:04.784299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" event={"ID":"beccfcee-f6c2-418c-a598-40f78a06b03c","Type":"ContainerStarted","Data":"13669e15f6c5f931c7bcc344d214d76bce6f192720e0785f9d085109d5c790aa"} Nov 24 20:40:04 crc kubenswrapper[4812]: I1124 20:40:04.784534 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:07 crc kubenswrapper[4812]: I1124 20:40:07.618143 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerName="rabbitmq" containerID="cri-o://a3e3e8f62126ba3683a5a7b41140d73f2247b5c22b8ff12219a49651a7f11d53" gracePeriod=604796 Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.162403 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" containerName="rabbitmq" containerID="cri-o://9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8" gracePeriod=604797 Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.214590 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.240912 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" podStartSLOduration=6.240889558 podStartE2EDuration="6.240889558s" podCreationTimestamp="2025-11-24 20:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 20:40:04.830008893 +0000 UTC m=+4998.618961264" watchObservedRunningTime="2025-11-24 20:40:08.240889558 +0000 UTC m=+5002.029841959" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.283070 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c6c64b5c-glnmb"] Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.283322 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" podUID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerName="dnsmasq-dns" containerID="cri-o://489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1" gracePeriod=10 Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.741291 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.817304 4812 generic.go:334] "Generic (PLEG): container finished" podID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerID="489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1" exitCode=0 Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.817378 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.817374 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" event={"ID":"f180ecb5-e534-49cd-a9a4-e12a29ecca51","Type":"ContainerDied","Data":"489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1"} Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.817435 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c6c64b5c-glnmb" event={"ID":"f180ecb5-e534-49cd-a9a4-e12a29ecca51","Type":"ContainerDied","Data":"1456aa594e71fbb1bf8ca3e2dfbc1fe9f0b7ddfaa1406fee72ec8a109e593eae"} Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.817460 4812 scope.go:117] "RemoveContainer" containerID="489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.835964 4812 scope.go:117] "RemoveContainer" containerID="db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.837123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdznb\" (UniqueName: \"kubernetes.io/projected/f180ecb5-e534-49cd-a9a4-e12a29ecca51-kube-api-access-gdznb\") pod \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.837322 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-config\") pod \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.837465 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-dns-svc\") pod \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\" (UID: \"f180ecb5-e534-49cd-a9a4-e12a29ecca51\") " Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.843668 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f180ecb5-e534-49cd-a9a4-e12a29ecca51-kube-api-access-gdznb" (OuterVolumeSpecName: "kube-api-access-gdznb") pod "f180ecb5-e534-49cd-a9a4-e12a29ecca51" (UID: "f180ecb5-e534-49cd-a9a4-e12a29ecca51"). InnerVolumeSpecName "kube-api-access-gdznb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.883790 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-config" (OuterVolumeSpecName: "config") pod "f180ecb5-e534-49cd-a9a4-e12a29ecca51" (UID: "f180ecb5-e534-49cd-a9a4-e12a29ecca51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.897024 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f180ecb5-e534-49cd-a9a4-e12a29ecca51" (UID: "f180ecb5-e534-49cd-a9a4-e12a29ecca51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.936301 4812 scope.go:117] "RemoveContainer" containerID="489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1" Nov 24 20:40:08 crc kubenswrapper[4812]: E1124 20:40:08.937312 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1\": container with ID starting with 489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1 not found: ID does not exist" containerID="489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.937420 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1"} err="failed to get container status \"489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1\": rpc error: code = NotFound desc = could not find container \"489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1\": container with ID starting with 489c45fb674c28568c6ab437b387403d4d9e026cf2fc3eefb80b1ed473f708b1 not found: ID does not exist" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.937457 4812 scope.go:117] "RemoveContainer" containerID="db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537" Nov 24 20:40:08 crc kubenswrapper[4812]: E1124 20:40:08.937741 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537\": container with ID starting with db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537 not found: ID does not exist" containerID="db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.937765 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537"} err="failed to get container status \"db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537\": rpc error: code = NotFound desc = could not find container \"db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537\": container with ID starting with 
db0c44173655a9d2e9e4686b3ee9b11f98ae4119a3c03ccab0164f54e1019537 not found: ID does not exist" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.938925 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.938944 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdznb\" (UniqueName: \"kubernetes.io/projected/f180ecb5-e534-49cd-a9a4-e12a29ecca51-kube-api-access-gdznb\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:08 crc kubenswrapper[4812]: I1124 20:40:08.938955 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f180ecb5-e534-49cd-a9a4-e12a29ecca51-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:09 crc kubenswrapper[4812]: I1124 20:40:09.157207 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c6c64b5c-glnmb"] Nov 24 20:40:09 crc kubenswrapper[4812]: I1124 20:40:09.167572 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c6c64b5c-glnmb"] Nov 24 20:40:10 crc kubenswrapper[4812]: I1124 20:40:10.980228 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" path="/var/lib/kubelet/pods/f180ecb5-e534-49cd-a9a4-e12a29ecca51/volumes" Nov 24 20:40:13 crc kubenswrapper[4812]: I1124 20:40:13.127195 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.245:5671: connect: connection refused" Nov 24 20:40:13 crc kubenswrapper[4812]: I1124 20:40:13.396526 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.246:5671: connect: connection refused" Nov 24 20:40:13 crc kubenswrapper[4812]: I1124 20:40:13.875038 4812 generic.go:334] "Generic (PLEG): container finished" podID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerID="a3e3e8f62126ba3683a5a7b41140d73f2247b5c22b8ff12219a49651a7f11d53" exitCode=0 Nov 24 20:40:13 crc kubenswrapper[4812]: I1124 20:40:13.875099 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2962d2b-d745-4923-bb91-47cf0f07eb7b","Type":"ContainerDied","Data":"a3e3e8f62126ba3683a5a7b41140d73f2247b5c22b8ff12219a49651a7f11d53"} Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.297309 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.443966 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-erlang-cookie\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.444025 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-tls\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.444046 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-server-conf\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.444674 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.444746 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-confd\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445054 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445093 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2962d2b-d745-4923-bb91-47cf0f07eb7b-pod-info\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445123 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2962d2b-d745-4923-bb91-47cf0f07eb7b-erlang-cookie-secret\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445141 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-plugins-conf\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445157 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-config-data\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445185 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-plugins\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445204 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfcw5\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-kube-api-access-lfcw5\") pod \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\" (UID: \"e2962d2b-d745-4923-bb91-47cf0f07eb7b\") " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.445443 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.446859 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.447236 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.462074 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-kube-api-access-lfcw5" (OuterVolumeSpecName: "kube-api-access-lfcw5") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "kube-api-access-lfcw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.464219 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.467354 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52" (OuterVolumeSpecName: "persistence") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "pvc-f2c05e05-c5f2-4121-b965-122db19dfe52". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.467542 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e2962d2b-d745-4923-bb91-47cf0f07eb7b-pod-info" (OuterVolumeSpecName: "pod-info") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.474414 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-config-data" (OuterVolumeSpecName: "config-data") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.475517 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2962d2b-d745-4923-bb91-47cf0f07eb7b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.501187 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-server-conf" (OuterVolumeSpecName: "server-conf") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.542601 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e2962d2b-d745-4923-bb91-47cf0f07eb7b" (UID: "e2962d2b-d745-4923-bb91-47cf0f07eb7b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546014 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546079 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") on node \"crc\" " Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546096 4812 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2962d2b-d745-4923-bb91-47cf0f07eb7b-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546110 4812 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2962d2b-d745-4923-bb91-47cf0f07eb7b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546124 4812 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546136 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546148 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546160 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfcw5\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-kube-api-access-lfcw5\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546171 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2962d2b-d745-4923-bb91-47cf0f07eb7b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.546183 4812 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2962d2b-d745-4923-bb91-47cf0f07eb7b-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.566241 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.566526 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f2c05e05-c5f2-4121-b965-122db19dfe52" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52") on node "crc"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.647629 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.688750 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748633 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748685 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-config-data\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748707 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-plugins-conf\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748730 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-confd\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748765 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-tls\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748791 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-erlang-cookie\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748811 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-plugins\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748836 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fec318d7-3a68-4aa4-8145-511d13aae323-erlang-cookie-secret\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748851 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-server-conf\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748876 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v65z\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-kube-api-access-6v65z\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.748894 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fec318d7-3a68-4aa4-8145-511d13aae323-pod-info\") pod \"fec318d7-3a68-4aa4-8145-511d13aae323\" (UID: \"fec318d7-3a68-4aa4-8145-511d13aae323\") "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.749957 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.751070 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.751142 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.758266 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.758438 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/fec318d7-3a68-4aa4-8145-511d13aae323-pod-info" (OuterVolumeSpecName: "pod-info") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.764552 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fec318d7-3a68-4aa4-8145-511d13aae323-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.765147 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb" (OuterVolumeSpecName: "persistence") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "pvc-63adf78e-a6f9-47f2-a73b-780e889378eb". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.767358 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-kube-api-access-6v65z" (OuterVolumeSpecName: "kube-api-access-6v65z") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "kube-api-access-6v65z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.769859 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-config-data" (OuterVolumeSpecName: "config-data") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.789775 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-server-conf" (OuterVolumeSpecName: "server-conf") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.849640 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v65z\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-kube-api-access-6v65z\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.849922 4812 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/fec318d7-3a68-4aa4-8145-511d13aae323-pod-info\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.849957 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") on node \"crc\" "
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.849970 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.849981 4812 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.849994 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.850004 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.850015 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.850024 4812 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/fec318d7-3a68-4aa4-8145-511d13aae323-server-conf\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.850033 4812 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/fec318d7-3a68-4aa4-8145-511d13aae323-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.879229 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "fec318d7-3a68-4aa4-8145-511d13aae323" (UID: "fec318d7-3a68-4aa4-8145-511d13aae323"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.880411 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.880625 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-63adf78e-a6f9-47f2-a73b-780e889378eb" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb") on node "crc"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.886582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"e2962d2b-d745-4923-bb91-47cf0f07eb7b","Type":"ContainerDied","Data":"adc50b143f7556749ffea279fe2f0a0711c982e81a99f521b834f6ed870c0e30"}
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.886626 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.886637 4812 scope.go:117] "RemoveContainer" containerID="a3e3e8f62126ba3683a5a7b41140d73f2247b5c22b8ff12219a49651a7f11d53"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.889864 4812 generic.go:334] "Generic (PLEG): container finished" podID="fec318d7-3a68-4aa4-8145-511d13aae323" containerID="9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8" exitCode=0
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.889981 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fec318d7-3a68-4aa4-8145-511d13aae323","Type":"ContainerDied","Data":"9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8"}
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.890054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"fec318d7-3a68-4aa4-8145-511d13aae323","Type":"ContainerDied","Data":"b298287db0d79b8ea92df56858db642759ce6dddc43c669a1ab4dee3d4cc0c05"}
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.890083 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.917202 4812 scope.go:117] "RemoveContainer" containerID="6bbff99df52c8a970fa4e4db65e02599974521241babc0b1a998f62fad0fa637"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.930660 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.938544 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.950171 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.957386 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.962877 4812 scope.go:117] "RemoveContainer" containerID="9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8"
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.963966 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:14 crc kubenswrapper[4812]: I1124 20:40:14.963992 4812 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/fec318d7-3a68-4aa4-8145-511d13aae323-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:14.990510 4812 scope.go:117] "RemoveContainer" containerID="735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:14.992951 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" path="/var/lib/kubelet/pods/e2962d2b-d745-4923-bb91-47cf0f07eb7b/volumes"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:14.994639 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" path="/var/lib/kubelet/pods/fec318d7-3a68-4aa4-8145-511d13aae323/volumes"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.000670 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.007676 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerName="init"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.007708 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerName="init"
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.007732 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" containerName="setup-container"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.007741 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" containerName="setup-container"
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.007765 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" containerName="rabbitmq"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.007778 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" containerName="rabbitmq"
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.007804 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerName="rabbitmq"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.007821 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerName="rabbitmq"
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.007851 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerName="setup-container"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.007860 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerName="setup-container"
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.007896 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerName="dnsmasq-dns"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.007904 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerName="dnsmasq-dns"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.018846 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f180ecb5-e534-49cd-a9a4-e12a29ecca51" containerName="dnsmasq-dns"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.018886 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec318d7-3a68-4aa4-8145-511d13aae323" containerName="rabbitmq"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.018908 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2962d2b-d745-4923-bb91-47cf0f07eb7b" containerName="rabbitmq"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.020315 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.020608 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.021067 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.026453 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.040734 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.040766 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.041096 4812 scope.go:117] "RemoveContainer" containerID="9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.041938 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.042281 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8\": container with ID starting with 9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8 not found: ID does not exist" containerID="9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.042321 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8"} err="failed to get container status \"9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8\": rpc error: code = NotFound desc = could not find container \"9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8\": container with ID starting with 9d3fe8c1d76e9a906db9bdbc174d5bc86872fcafd8943e4a5d19f0edd8b10ca8 not found: ID does not exist"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.042362 4812 scope.go:117] "RemoveContainer" containerID="735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0"
Nov 24 20:40:15 crc kubenswrapper[4812]: E1124 20:40:15.042605 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0\": container with ID starting with 735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0 not found: ID does not exist" containerID="735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.042623 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0"} err="failed to get container status \"735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0\": rpc error: code = NotFound desc = could not find container \"735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0\": container with ID starting with 735d30c1f30577ea593cd1124cbace3b6654af2ba60c9b45af06482e0aa4b2e0 not found: ID does not exist"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.043486 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.043720 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.044510 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.044586 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-7cm2b"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.044669 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.044873 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.044899 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.044918 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-s2qdl"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.044997 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.045152 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.046784 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.046834 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065088 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065167 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065227 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065282 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065316 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d432b03-6333-4650-b763-433dd01c0977-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065362 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d432b03-6333-4650-b763-433dd01c0977-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065381 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065397 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbmdl\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-kube-api-access-nbmdl\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065419 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.065468 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166441 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166495 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166526 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166548 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d432b03-6333-4650-b763-433dd01c0977-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166587 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d432b03-6333-4650-b763-433dd01c0977-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166608 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbmdl\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-kube-api-access-nbmdl\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166653 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166672 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166688 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166719 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc
kubenswrapper[4812]: I1124 20:40:15.166749 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166773 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166790 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f318f4-965e-4123-8fd2-21d1f495d110-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166805 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q986\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-kube-api-access-5q986\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166828 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166852 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166882 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f318f4-965e-4123-8fd2-21d1f495d110-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166898 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.166923 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.167433 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.167454 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.168008 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.168110 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.168228 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2d432b03-6333-4650-b763-433dd01c0977-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.169472 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.169495 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b7339b2a679bf59bd9d095b8c9a5f08f52f05b9abec851abbd4b77e9fd0415e3/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.169852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.171607 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2d432b03-6333-4650-b763-433dd01c0977-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.172467 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.175852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2d432b03-6333-4650-b763-433dd01c0977-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.185559 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbmdl\" (UniqueName: \"kubernetes.io/projected/2d432b03-6333-4650-b763-433dd01c0977-kube-api-access-nbmdl\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.200437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63adf78e-a6f9-47f2-a73b-780e889378eb\") pod \"rabbitmq-cell1-server-0\" (UID: \"2d432b03-6333-4650-b763-433dd01c0977\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.268690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.268799 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.268833 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f318f4-965e-4123-8fd2-21d1f495d110-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.268864 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.268915 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269052 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269147 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269183 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269282 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269315 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f318f4-965e-4123-8fd2-21d1f495d110-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269354 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q986\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-kube-api-access-5q986\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269748 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.269899 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.270444 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-config-data\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.271088 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.273354 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.274091 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/67f318f4-965e-4123-8fd2-21d1f495d110-server-conf\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0"
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.274494 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.274594 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3ae9faa9c8565b4a9c378d06dabfd62c4b1d925f5972ce11cf26ac121ab533ae/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.274626 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/67f318f4-965e-4123-8fd2-21d1f495d110-pod-info\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.274851 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/67f318f4-965e-4123-8fd2-21d1f495d110-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.277207 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.298808 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q986\" (UniqueName: \"kubernetes.io/projected/67f318f4-965e-4123-8fd2-21d1f495d110-kube-api-access-5q986\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.307094 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f2c05e05-c5f2-4121-b965-122db19dfe52\") pod \"rabbitmq-server-0\" (UID: \"67f318f4-965e-4123-8fd2-21d1f495d110\") " pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.374592 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.384970 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.697809 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 20:40:15 crc kubenswrapper[4812]: W1124 20:40:15.764755 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67f318f4_965e_4123_8fd2_21d1f495d110.slice/crio-aaf0392b82e1265babf4cf76452cef726b320ac8493cf9fa65cc1205afbeafca WatchSource:0}: Error finding container aaf0392b82e1265babf4cf76452cef726b320ac8493cf9fa65cc1205afbeafca: Status 404 returned error can't find the container with id aaf0392b82e1265babf4cf76452cef726b320ac8493cf9fa65cc1205afbeafca Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.764935 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.904852 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d432b03-6333-4650-b763-433dd01c0977","Type":"ContainerStarted","Data":"97b2fef8292cc8b0d074cf0bf75960c44967c7019dfd65c331bfbeca2b5dd247"} Nov 24 20:40:15 crc kubenswrapper[4812]: I1124 20:40:15.909352 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f318f4-965e-4123-8fd2-21d1f495d110","Type":"ContainerStarted","Data":"aaf0392b82e1265babf4cf76452cef726b320ac8493cf9fa65cc1205afbeafca"} Nov 24 20:40:17 crc kubenswrapper[4812]: I1124 20:40:17.936567 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d432b03-6333-4650-b763-433dd01c0977","Type":"ContainerStarted","Data":"bb2a8eaedec73e8876bd8027a1b63275a0cccac39ab0e1cb2bc406b6fd75a062"} Nov 24 20:40:17 crc kubenswrapper[4812]: I1124 20:40:17.942950 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f318f4-965e-4123-8fd2-21d1f495d110","Type":"ContainerStarted","Data":"bb853bea301f114502c21fb619dae26dcca812c89bd4bdb58f7fc6c69c30b9e4"} Nov 24 20:40:32 crc kubenswrapper[4812]: I1124 20:40:32.998874 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:40:33 crc kubenswrapper[4812]: I1124 20:40:32.999570 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:40:52 crc kubenswrapper[4812]: I1124 20:40:52.302752 4812 generic.go:334] "Generic (PLEG): container finished" podID="2d432b03-6333-4650-b763-433dd01c0977" containerID="bb2a8eaedec73e8876bd8027a1b63275a0cccac39ab0e1cb2bc406b6fd75a062" exitCode=0 Nov 24 20:40:52 crc kubenswrapper[4812]: I1124 20:40:52.302858 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d432b03-6333-4650-b763-433dd01c0977","Type":"ContainerDied","Data":"bb2a8eaedec73e8876bd8027a1b63275a0cccac39ab0e1cb2bc406b6fd75a062"} Nov 24 20:40:52 crc kubenswrapper[4812]: I1124 20:40:52.310669 4812 generic.go:334] "Generic (PLEG): 
container finished" podID="67f318f4-965e-4123-8fd2-21d1f495d110" containerID="bb853bea301f114502c21fb619dae26dcca812c89bd4bdb58f7fc6c69c30b9e4" exitCode=0 Nov 24 20:40:52 crc kubenswrapper[4812]: I1124 20:40:52.310721 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f318f4-965e-4123-8fd2-21d1f495d110","Type":"ContainerDied","Data":"bb853bea301f114502c21fb619dae26dcca812c89bd4bdb58f7fc6c69c30b9e4"} Nov 24 20:40:53 crc kubenswrapper[4812]: I1124 20:40:53.321252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"67f318f4-965e-4123-8fd2-21d1f495d110","Type":"ContainerStarted","Data":"e0e9709ef4cc22a11a67833d313a0e7b5551d9451a7bfa10353874fc345c4e27"} Nov 24 20:40:53 crc kubenswrapper[4812]: I1124 20:40:53.322206 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 20:40:53 crc kubenswrapper[4812]: I1124 20:40:53.323815 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2d432b03-6333-4650-b763-433dd01c0977","Type":"ContainerStarted","Data":"c13bf56a06d3a0151effec7b78bbbc779052573c544ce581a1502ce65b5719ed"} Nov 24 20:40:53 crc kubenswrapper[4812]: I1124 20:40:53.323999 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:40:53 crc kubenswrapper[4812]: I1124 20:40:53.352459 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.352430388 podStartE2EDuration="39.352430388s" podCreationTimestamp="2025-11-24 20:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:40:53.345722807 +0000 UTC m=+5047.134675188" watchObservedRunningTime="2025-11-24 20:40:53.352430388 +0000 UTC m=+5047.141382769" Nov 24 20:40:53 crc kubenswrapper[4812]: I1124 20:40:53.376395 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.37637216 podStartE2EDuration="39.37637216s" podCreationTimestamp="2025-11-24 20:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:40:53.370581475 +0000 UTC m=+5047.159533876" watchObservedRunningTime="2025-11-24 20:40:53.37637216 +0000 UTC m=+5047.165324541" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.361075 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgpf5"] Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.363932 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.381627 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgpf5"] Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.496084 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdnl\" (UniqueName: \"kubernetes.io/projected/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-kube-api-access-gpdnl\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.496176 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-catalog-content\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.496315 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-utilities\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.597272 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdnl\" (UniqueName: \"kubernetes.io/projected/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-kube-api-access-gpdnl\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.597323 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-catalog-content\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.597402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-utilities\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.597958 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-catalog-content\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.598188 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-utilities\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.619156 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gpdnl\" (UniqueName: \"kubernetes.io/projected/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-kube-api-access-gpdnl\") pod \"redhat-marketplace-zgpf5\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:00 crc kubenswrapper[4812]: I1124 20:41:00.698643 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:01 crc kubenswrapper[4812]: I1124 20:41:01.146291 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgpf5"] Nov 24 20:41:01 crc kubenswrapper[4812]: I1124 20:41:01.391193 4812 generic.go:334] "Generic (PLEG): container finished" podID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerID="94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9" exitCode=0 Nov 24 20:41:01 crc kubenswrapper[4812]: I1124 20:41:01.391235 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgpf5" event={"ID":"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02","Type":"ContainerDied","Data":"94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9"} Nov 24 20:41:01 crc kubenswrapper[4812]: I1124 20:41:01.391537 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgpf5" event={"ID":"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02","Type":"ContainerStarted","Data":"4eee9a541bf2d840c927f870f2214d3af100b628658c974077f0563e1a34bfb3"} Nov 24 20:41:02 crc kubenswrapper[4812]: I1124 20:41:02.401555 4812 generic.go:334] "Generic (PLEG): container finished" podID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerID="652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307" exitCode=0 Nov 24 20:41:02 crc kubenswrapper[4812]: I1124 20:41:02.401646 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgpf5" event={"ID":"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02","Type":"ContainerDied","Data":"652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307"} Nov 24 20:41:02 crc kubenswrapper[4812]: I1124 20:41:02.999117 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:41:02 crc kubenswrapper[4812]: I1124 20:41:02.999186 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:41:03 crc kubenswrapper[4812]: I1124 20:41:03.440354 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgpf5" event={"ID":"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02","Type":"ContainerStarted","Data":"946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba"} Nov 24 20:41:03 crc kubenswrapper[4812]: I1124 20:41:03.501239 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgpf5" podStartSLOduration=1.9427966639999998 podStartE2EDuration="3.501211332s" podCreationTimestamp="2025-11-24 20:41:00 +0000 UTC" firstStartedPulling="2025-11-24 
20:41:01.392829696 +0000 UTC m=+5055.181782067" lastFinishedPulling="2025-11-24 20:41:02.951244364 +0000 UTC m=+5056.740196735" observedRunningTime="2025-11-24 20:41:03.497850796 +0000 UTC m=+5057.286803177" watchObservedRunningTime="2025-11-24 20:41:03.501211332 +0000 UTC m=+5057.290163723" Nov 24 20:41:05 crc kubenswrapper[4812]: I1124 20:41:05.378670 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 20:41:05 crc kubenswrapper[4812]: I1124 20:41:05.388569 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 20:41:09 crc kubenswrapper[4812]: I1124 20:41:09.791534 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Nov 24 20:41:09 crc kubenswrapper[4812]: I1124 20:41:09.795099 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 24 20:41:09 crc kubenswrapper[4812]: I1124 20:41:09.800763 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pz88h" Nov 24 20:41:09 crc kubenswrapper[4812]: I1124 20:41:09.802022 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 24 20:41:09 crc kubenswrapper[4812]: I1124 20:41:09.891105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fn8v\" (UniqueName: \"kubernetes.io/projected/8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318-kube-api-access-5fn8v\") pod \"mariadb-client-1-default\" (UID: \"8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318\") " pod="openstack/mariadb-client-1-default" Nov 24 20:41:09 crc kubenswrapper[4812]: I1124 20:41:09.992460 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fn8v\" (UniqueName: \"kubernetes.io/projected/8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318-kube-api-access-5fn8v\") pod \"mariadb-client-1-default\" (UID: \"8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318\") " pod="openstack/mariadb-client-1-default" Nov 24 20:41:10 crc kubenswrapper[4812]: I1124 20:41:10.036228 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fn8v\" (UniqueName: \"kubernetes.io/projected/8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318-kube-api-access-5fn8v\") pod \"mariadb-client-1-default\" (UID: \"8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318\") " pod="openstack/mariadb-client-1-default" Nov 24 20:41:10 crc kubenswrapper[4812]: I1124 20:41:10.131491 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 24 20:41:10 crc kubenswrapper[4812]: I1124 20:41:10.692081 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 24 20:41:10 crc kubenswrapper[4812]: W1124 20:41:10.695159 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6ea4c8_e85d_42ee_8dd1_7945bf5fa318.slice/crio-8a15a9c93ec2c48f8dbe355e239119cec16b70fbbd5d98280efcde0669ff3a9b WatchSource:0}: Error finding container 8a15a9c93ec2c48f8dbe355e239119cec16b70fbbd5d98280efcde0669ff3a9b: Status 404 returned error can't find the container with id 8a15a9c93ec2c48f8dbe355e239119cec16b70fbbd5d98280efcde0669ff3a9b Nov 24 20:41:10 crc kubenswrapper[4812]: I1124 20:41:10.698733 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:10 crc kubenswrapper[4812]: I1124 20:41:10.698972 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:10 crc kubenswrapper[4812]: I1124 20:41:10.761325 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:11 crc kubenswrapper[4812]: I1124 20:41:11.516947 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318","Type":"ContainerStarted","Data":"8a15a9c93ec2c48f8dbe355e239119cec16b70fbbd5d98280efcde0669ff3a9b"} Nov 24 20:41:11 crc kubenswrapper[4812]: I1124 20:41:11.591730 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:11 crc kubenswrapper[4812]: I1124 20:41:11.652066 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgpf5"] Nov 24 20:41:12 crc kubenswrapper[4812]: I1124 20:41:12.533540 4812 generic.go:334] "Generic (PLEG): container finished" podID="8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318" containerID="dc8d22dfff79c932edf45fdc8a10bf030f76c8c92aabdb16755aa411fd900d4b" exitCode=0 Nov 24 20:41:12 crc kubenswrapper[4812]: I1124 20:41:12.533743 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318","Type":"ContainerDied","Data":"dc8d22dfff79c932edf45fdc8a10bf030f76c8c92aabdb16755aa411fd900d4b"} Nov 24 20:41:13 crc kubenswrapper[4812]: I1124 20:41:13.542332 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgpf5" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="registry-server" containerID="cri-o://946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba" gracePeriod=2 Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.069149 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.078828 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.094265 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318/mariadb-client-1-default/0.log" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.127687 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.134152 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.162547 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-utilities\") pod \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.162622 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpdnl\" (UniqueName: \"kubernetes.io/projected/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-kube-api-access-gpdnl\") pod \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.162740 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-catalog-content\") pod \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\" (UID: \"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02\") " Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.162770 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fn8v\" (UniqueName: \"kubernetes.io/projected/8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318-kube-api-access-5fn8v\") pod \"8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318\" (UID: \"8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318\") " Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.163528 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-utilities" (OuterVolumeSpecName: "utilities") pod "87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" (UID: "87ffda09-50aa-4938-bbdc-a7bcd8a3ce02"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.170302 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318-kube-api-access-5fn8v" (OuterVolumeSpecName: "kube-api-access-5fn8v") pod "8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318" (UID: "8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318"). InnerVolumeSpecName "kube-api-access-5fn8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.170832 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-kube-api-access-gpdnl" (OuterVolumeSpecName: "kube-api-access-gpdnl") pod "87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" (UID: "87ffda09-50aa-4938-bbdc-a7bcd8a3ce02"). InnerVolumeSpecName "kube-api-access-gpdnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.179957 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" (UID: "87ffda09-50aa-4938-bbdc-a7bcd8a3ce02"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.264380 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpdnl\" (UniqueName: \"kubernetes.io/projected/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-kube-api-access-gpdnl\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.264692 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.264884 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fn8v\" (UniqueName: \"kubernetes.io/projected/8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318-kube-api-access-5fn8v\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.264972 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.553260 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.553274 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a15a9c93ec2c48f8dbe355e239119cec16b70fbbd5d98280efcde0669ff3a9b" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.557110 4812 generic.go:334] "Generic (PLEG): container finished" podID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerID="946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba" exitCode=0 Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.557188 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgpf5" event={"ID":"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02","Type":"ContainerDied","Data":"946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba"} Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.557235 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgpf5" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.557255 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgpf5" event={"ID":"87ffda09-50aa-4938-bbdc-a7bcd8a3ce02","Type":"ContainerDied","Data":"4eee9a541bf2d840c927f870f2214d3af100b628658c974077f0563e1a34bfb3"} Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.557292 4812 scope.go:117] "RemoveContainer" containerID="946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.591057 4812 scope.go:117] "RemoveContainer" containerID="652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.623921 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgpf5"] Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.637611 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgpf5"] Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.645291 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 24 20:41:14 crc kubenswrapper[4812]: E1124 20:41:14.645926 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318" containerName="mariadb-client-1-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.645970 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318" containerName="mariadb-client-1-default" Nov 24 20:41:14 crc kubenswrapper[4812]: E1124 20:41:14.646025 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="extract-content" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.646045 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="extract-content" Nov 24 20:41:14 crc kubenswrapper[4812]: E1124 20:41:14.646093 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="registry-server" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.646112 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="registry-server" Nov 24 20:41:14 crc kubenswrapper[4812]: E1124 20:41:14.646149 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="extract-utilities" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.646166 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="extract-utilities" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.646544 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" containerName="registry-server" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.646584 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318" containerName="mariadb-client-1-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.647744 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.661280 4812 scope.go:117] "RemoveContainer" containerID="94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.661598 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pz88h" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.676679 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.714508 4812 scope.go:117] "RemoveContainer" containerID="946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba" Nov 24 20:41:14 crc kubenswrapper[4812]: E1124 20:41:14.715032 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba\": container with ID starting with 946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba not found: ID does not exist" containerID="946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.715085 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba"} err="failed to get container status \"946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba\": rpc error: code = NotFound desc = could not find container \"946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba\": container with ID starting with 946f9d54f664053b8b465ae68d7783b1d4ba315d23739ccfbb304ca43fa8efba not found: ID does not exist" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.715118 4812 scope.go:117] "RemoveContainer" containerID="652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307" Nov 24 20:41:14 crc kubenswrapper[4812]: E1124 20:41:14.715527 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307\": container with ID starting with 652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307 not found: ID does not exist" containerID="652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.715659 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307"} err="failed to get container status \"652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307\": rpc error: code = NotFound desc = could not find container \"652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307\": container with ID starting with 652ac0c1f16fbcaa6e13e6c0afff0212fd48c649b9fc3ab5eecf243393d98307 not found: ID does not exist" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.715762 4812 scope.go:117] "RemoveContainer" containerID="94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9" Nov 24 20:41:14 crc kubenswrapper[4812]: E1124 20:41:14.716231 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9\": container with ID starting with 
94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9 not found: ID does not exist" containerID="94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.716275 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9"} err="failed to get container status \"94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9\": rpc error: code = NotFound desc = could not find container \"94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9\": container with ID starting with 94c34b170b5ba961f198ed4f3e4ef56637674dbbd945884f6259f743780787e9 not found: ID does not exist" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.774036 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkv9t\" (UniqueName: \"kubernetes.io/projected/7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc-kube-api-access-kkv9t\") pod \"mariadb-client-2-default\" (UID: \"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc\") " pod="openstack/mariadb-client-2-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.876697 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkv9t\" (UniqueName: \"kubernetes.io/projected/7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc-kube-api-access-kkv9t\") pod \"mariadb-client-2-default\" (UID: \"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc\") " pod="openstack/mariadb-client-2-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.901149 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkv9t\" (UniqueName: \"kubernetes.io/projected/7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc-kube-api-access-kkv9t\") pod \"mariadb-client-2-default\" (UID: \"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc\") " pod="openstack/mariadb-client-2-default" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.973784 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ffda09-50aa-4938-bbdc-a7bcd8a3ce02" path="/var/lib/kubelet/pods/87ffda09-50aa-4938-bbdc-a7bcd8a3ce02/volumes" Nov 24 20:41:14 crc kubenswrapper[4812]: I1124 20:41:14.974641 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318" path="/var/lib/kubelet/pods/8e6ea4c8-e85d-42ee-8dd1-7945bf5fa318/volumes" Nov 24 20:41:15 crc kubenswrapper[4812]: I1124 20:41:15.036742 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 24 20:41:15 crc kubenswrapper[4812]: W1124 20:41:15.643816 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ae0229c_9bcb_43f4_88fe_c1a6ae03ebcc.slice/crio-79a23549f3037e7010e92dfa8bfb09892c0b2913e8692fe7ba1c72c641715c0e WatchSource:0}: Error finding container 79a23549f3037e7010e92dfa8bfb09892c0b2913e8692fe7ba1c72c641715c0e: Status 404 returned error can't find the container with id 79a23549f3037e7010e92dfa8bfb09892c0b2913e8692fe7ba1c72c641715c0e Nov 24 20:41:15 crc kubenswrapper[4812]: I1124 20:41:15.645263 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 24 20:41:16 crc kubenswrapper[4812]: I1124 20:41:16.580203 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc","Type":"ContainerStarted","Data":"21f3ce6f18737be1eb3e1d1ffd0b21ae7c6433e60b2d3bf839f5f09b78105d07"} Nov 24 20:41:16 crc kubenswrapper[4812]: I1124 20:41:16.580668 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc","Type":"ContainerStarted","Data":"79a23549f3037e7010e92dfa8bfb09892c0b2913e8692fe7ba1c72c641715c0e"} Nov 24 20:41:16 crc kubenswrapper[4812]: I1124 20:41:16.609325 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=2.6092963510000002 podStartE2EDuration="2.609296351s" podCreationTimestamp="2025-11-24 20:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:41:16.598551195 +0000 UTC m=+5070.387503626" watchObservedRunningTime="2025-11-24 20:41:16.609296351 +0000 UTC m=+5070.398248752" Nov 24 20:41:17 crc kubenswrapper[4812]: I1124 20:41:17.593154 4812 generic.go:334] "Generic (PLEG): container finished" podID="7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc" containerID="21f3ce6f18737be1eb3e1d1ffd0b21ae7c6433e60b2d3bf839f5f09b78105d07" exitCode=1 Nov 24 20:41:17 crc kubenswrapper[4812]: I1124 20:41:17.593256 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc","Type":"ContainerDied","Data":"21f3ce6f18737be1eb3e1d1ffd0b21ae7c6433e60b2d3bf839f5f09b78105d07"} Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.004741 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.044972 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.047470 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkv9t\" (UniqueName: \"kubernetes.io/projected/7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc-kube-api-access-kkv9t\") pod \"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc\" (UID: \"7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc\") " Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.051083 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.053701 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc-kube-api-access-kkv9t" (OuterVolumeSpecName: "kube-api-access-kkv9t") pod "7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc" (UID: "7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc"). InnerVolumeSpecName "kube-api-access-kkv9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.148623 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkv9t\" (UniqueName: \"kubernetes.io/projected/7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc-kube-api-access-kkv9t\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.507371 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 24 20:41:19 crc kubenswrapper[4812]: E1124 20:41:19.507947 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc" containerName="mariadb-client-2-default" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.508052 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc" containerName="mariadb-client-2-default" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.508320 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc" containerName="mariadb-client-2-default" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.509094 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.514949 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.554228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w96x2\" (UniqueName: \"kubernetes.io/projected/b5de9b0d-a643-4115-8532-80c38eccf9bc-kube-api-access-w96x2\") pod \"mariadb-client-1\" (UID: \"b5de9b0d-a643-4115-8532-80c38eccf9bc\") " pod="openstack/mariadb-client-1" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.613622 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79a23549f3037e7010e92dfa8bfb09892c0b2913e8692fe7ba1c72c641715c0e" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.613711 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.655373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w96x2\" (UniqueName: \"kubernetes.io/projected/b5de9b0d-a643-4115-8532-80c38eccf9bc-kube-api-access-w96x2\") pod \"mariadb-client-1\" (UID: \"b5de9b0d-a643-4115-8532-80c38eccf9bc\") " pod="openstack/mariadb-client-1" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.670694 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w96x2\" (UniqueName: \"kubernetes.io/projected/b5de9b0d-a643-4115-8532-80c38eccf9bc-kube-api-access-w96x2\") pod \"mariadb-client-1\" (UID: \"b5de9b0d-a643-4115-8532-80c38eccf9bc\") " pod="openstack/mariadb-client-1" Nov 24 20:41:19 crc kubenswrapper[4812]: I1124 20:41:19.838867 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 24 20:41:20 crc kubenswrapper[4812]: I1124 20:41:20.477241 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 24 20:41:20 crc kubenswrapper[4812]: I1124 20:41:20.625613 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"b5de9b0d-a643-4115-8532-80c38eccf9bc","Type":"ContainerStarted","Data":"cbb2c77364cbffd87cd1d81b6d35d92c629d49d3d35a513e3a43d4f2c39c1ef7"} Nov 24 20:41:20 crc kubenswrapper[4812]: I1124 20:41:20.986372 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc" path="/var/lib/kubelet/pods/7ae0229c-9bcb-43f4-88fe-c1a6ae03ebcc/volumes" Nov 24 20:41:21 crc kubenswrapper[4812]: I1124 20:41:21.640744 4812 generic.go:334] "Generic (PLEG): container finished" podID="b5de9b0d-a643-4115-8532-80c38eccf9bc" containerID="20a8f0f36c2f2d6f0ae28a7f82c46f198393607d99c2dbe2b18541246a316305" exitCode=0 Nov 24 20:41:21 crc kubenswrapper[4812]: I1124 20:41:21.640838 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"b5de9b0d-a643-4115-8532-80c38eccf9bc","Type":"ContainerDied","Data":"20a8f0f36c2f2d6f0ae28a7f82c46f198393607d99c2dbe2b18541246a316305"} Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.042154 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.061463 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_b5de9b0d-a643-4115-8532-80c38eccf9bc/mariadb-client-1/0.log" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.097441 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.103402 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.112561 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w96x2\" (UniqueName: \"kubernetes.io/projected/b5de9b0d-a643-4115-8532-80c38eccf9bc-kube-api-access-w96x2\") pod \"b5de9b0d-a643-4115-8532-80c38eccf9bc\" (UID: \"b5de9b0d-a643-4115-8532-80c38eccf9bc\") " Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.119807 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5de9b0d-a643-4115-8532-80c38eccf9bc-kube-api-access-w96x2" (OuterVolumeSpecName: "kube-api-access-w96x2") pod "b5de9b0d-a643-4115-8532-80c38eccf9bc" (UID: "b5de9b0d-a643-4115-8532-80c38eccf9bc"). InnerVolumeSpecName "kube-api-access-w96x2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.214520 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w96x2\" (UniqueName: \"kubernetes.io/projected/b5de9b0d-a643-4115-8532-80c38eccf9bc-kube-api-access-w96x2\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.520467 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 24 20:41:23 crc kubenswrapper[4812]: E1124 20:41:23.521211 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5de9b0d-a643-4115-8532-80c38eccf9bc" containerName="mariadb-client-1" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.521230 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5de9b0d-a643-4115-8532-80c38eccf9bc" containerName="mariadb-client-1" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.521416 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5de9b0d-a643-4115-8532-80c38eccf9bc" containerName="mariadb-client-1" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.522448 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.553272 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.620975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jznm4\" (UniqueName: \"kubernetes.io/projected/8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510-kube-api-access-jznm4\") pod \"mariadb-client-4-default\" (UID: \"8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510\") " pod="openstack/mariadb-client-4-default" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.657356 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbb2c77364cbffd87cd1d81b6d35d92c629d49d3d35a513e3a43d4f2c39c1ef7" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.657408 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.722256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jznm4\" (UniqueName: \"kubernetes.io/projected/8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510-kube-api-access-jznm4\") pod \"mariadb-client-4-default\" (UID: \"8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510\") " pod="openstack/mariadb-client-4-default" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.743147 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jznm4\" (UniqueName: \"kubernetes.io/projected/8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510-kube-api-access-jznm4\") pod \"mariadb-client-4-default\" (UID: \"8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510\") " pod="openstack/mariadb-client-4-default" Nov 24 20:41:23 crc kubenswrapper[4812]: I1124 20:41:23.842834 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 24 20:41:24 crc kubenswrapper[4812]: I1124 20:41:24.168156 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 24 20:41:24 crc kubenswrapper[4812]: I1124 20:41:24.673025 4812 generic.go:334] "Generic (PLEG): container finished" podID="8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510" containerID="30bec8470eb2fad5fa3632a9fe05bf6d8957bc96667337016d2bedc67f57555e" exitCode=0 Nov 24 20:41:24 crc kubenswrapper[4812]: I1124 20:41:24.673118 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510","Type":"ContainerDied","Data":"30bec8470eb2fad5fa3632a9fe05bf6d8957bc96667337016d2bedc67f57555e"} Nov 24 20:41:24 crc kubenswrapper[4812]: I1124 20:41:24.673178 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510","Type":"ContainerStarted","Data":"15ecd1c3cccc457a88f77c281f0e703a2a472673f1a9ba74d680d0e4865e0f87"} Nov 24 20:41:24 crc kubenswrapper[4812]: I1124 20:41:24.980314 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5de9b0d-a643-4115-8532-80c38eccf9bc" path="/var/lib/kubelet/pods/b5de9b0d-a643-4115-8532-80c38eccf9bc/volumes" Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.064099 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.085034 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510/mariadb-client-4-default/0.log" Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.116022 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.126179 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.160292 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jznm4\" (UniqueName: \"kubernetes.io/projected/8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510-kube-api-access-jznm4\") pod \"8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510\" (UID: \"8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510\") " Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.165758 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510-kube-api-access-jznm4" (OuterVolumeSpecName: "kube-api-access-jznm4") pod "8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510" (UID: "8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510"). InnerVolumeSpecName "kube-api-access-jznm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.262824 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jznm4\" (UniqueName: \"kubernetes.io/projected/8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510-kube-api-access-jznm4\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.706052 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ecd1c3cccc457a88f77c281f0e703a2a472673f1a9ba74d680d0e4865e0f87" Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.706220 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 24 20:41:26 crc kubenswrapper[4812]: I1124 20:41:26.989156 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510" path="/var/lib/kubelet/pods/8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510/volumes" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.282309 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 24 20:41:30 crc kubenswrapper[4812]: E1124 20:41:30.283593 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510" containerName="mariadb-client-4-default" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.283632 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510" containerName="mariadb-client-4-default" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.284043 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af7c1d6-2ac5-4e4a-b435-5dcfa2ee0510" containerName="mariadb-client-4-default" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.285274 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.288382 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pz88h" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.294015 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.326237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hknr\" (UniqueName: \"kubernetes.io/projected/991b555d-63ae-4af0-9b93-dd100189ecec-kube-api-access-2hknr\") pod \"mariadb-client-5-default\" (UID: \"991b555d-63ae-4af0-9b93-dd100189ecec\") " pod="openstack/mariadb-client-5-default" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.427636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hknr\" (UniqueName: \"kubernetes.io/projected/991b555d-63ae-4af0-9b93-dd100189ecec-kube-api-access-2hknr\") pod \"mariadb-client-5-default\" (UID: \"991b555d-63ae-4af0-9b93-dd100189ecec\") " pod="openstack/mariadb-client-5-default" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.454901 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hknr\" (UniqueName: \"kubernetes.io/projected/991b555d-63ae-4af0-9b93-dd100189ecec-kube-api-access-2hknr\") pod \"mariadb-client-5-default\" (UID: \"991b555d-63ae-4af0-9b93-dd100189ecec\") " pod="openstack/mariadb-client-5-default" Nov 24 20:41:30 crc kubenswrapper[4812]: I1124 20:41:30.622680 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 24 20:41:31 crc kubenswrapper[4812]: I1124 20:41:31.246483 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 24 20:41:31 crc kubenswrapper[4812]: I1124 20:41:31.770247 4812 generic.go:334] "Generic (PLEG): container finished" podID="991b555d-63ae-4af0-9b93-dd100189ecec" containerID="eb6309e6554adc9c1e72bd5de294bb46615395e03cc255f4ad72a11b941457ec" exitCode=0 Nov 24 20:41:31 crc kubenswrapper[4812]: I1124 20:41:31.770419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"991b555d-63ae-4af0-9b93-dd100189ecec","Type":"ContainerDied","Data":"eb6309e6554adc9c1e72bd5de294bb46615395e03cc255f4ad72a11b941457ec"} Nov 24 20:41:31 crc kubenswrapper[4812]: I1124 20:41:31.770727 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"991b555d-63ae-4af0-9b93-dd100189ecec","Type":"ContainerStarted","Data":"d572ffd4d591536330018cf8f7e5ca3227ca87f537e1b24593af0477c7a4348e"} Nov 24 20:41:32 crc kubenswrapper[4812]: I1124 20:41:32.999197 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:32.999565 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:32.999647 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.000716 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dbde7a21765fc5b63caec3c7e3d0f2beae4d92e2ef96dc251d6a1b6345e4d456"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.000786 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://dbde7a21765fc5b63caec3c7e3d0f2beae4d92e2ef96dc251d6a1b6345e4d456" gracePeriod=600 Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.268865 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.289123 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_991b555d-63ae-4af0-9b93-dd100189ecec/mariadb-client-5-default/0.log" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.320827 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.331132 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.386845 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hknr\" (UniqueName: \"kubernetes.io/projected/991b555d-63ae-4af0-9b93-dd100189ecec-kube-api-access-2hknr\") pod \"991b555d-63ae-4af0-9b93-dd100189ecec\" (UID: \"991b555d-63ae-4af0-9b93-dd100189ecec\") " Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.393496 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991b555d-63ae-4af0-9b93-dd100189ecec-kube-api-access-2hknr" (OuterVolumeSpecName: "kube-api-access-2hknr") pod "991b555d-63ae-4af0-9b93-dd100189ecec" (UID: "991b555d-63ae-4af0-9b93-dd100189ecec"). InnerVolumeSpecName "kube-api-access-2hknr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.458490 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 24 20:41:33 crc kubenswrapper[4812]: E1124 20:41:33.459081 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991b555d-63ae-4af0-9b93-dd100189ecec" containerName="mariadb-client-5-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.459116 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="991b555d-63ae-4af0-9b93-dd100189ecec" containerName="mariadb-client-5-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.459470 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="991b555d-63ae-4af0-9b93-dd100189ecec" containerName="mariadb-client-5-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.460366 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.464255 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.489639 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xg2z\" (UniqueName: \"kubernetes.io/projected/aebe1378-1358-41f2-a24d-4263a616d1f8-kube-api-access-2xg2z\") pod \"mariadb-client-6-default\" (UID: \"aebe1378-1358-41f2-a24d-4263a616d1f8\") " pod="openstack/mariadb-client-6-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.489701 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hknr\" (UniqueName: \"kubernetes.io/projected/991b555d-63ae-4af0-9b93-dd100189ecec-kube-api-access-2hknr\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.591805 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xg2z\" (UniqueName: \"kubernetes.io/projected/aebe1378-1358-41f2-a24d-4263a616d1f8-kube-api-access-2xg2z\") pod \"mariadb-client-6-default\" (UID: \"aebe1378-1358-41f2-a24d-4263a616d1f8\") " pod="openstack/mariadb-client-6-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.631756 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xg2z\" (UniqueName: \"kubernetes.io/projected/aebe1378-1358-41f2-a24d-4263a616d1f8-kube-api-access-2xg2z\") pod \"mariadb-client-6-default\" (UID: \"aebe1378-1358-41f2-a24d-4263a616d1f8\") " pod="openstack/mariadb-client-6-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.793645 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.796631 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="dbde7a21765fc5b63caec3c7e3d0f2beae4d92e2ef96dc251d6a1b6345e4d456" exitCode=0 Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.796750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"dbde7a21765fc5b63caec3c7e3d0f2beae4d92e2ef96dc251d6a1b6345e4d456"} Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.796827 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"} Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.796859 4812 scope.go:117] "RemoveContainer" containerID="df3c6a620949fb6456ce5fae589de1a7ed65c96c1ccb90e04ec82247d022951f" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.801302 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d572ffd4d591536330018cf8f7e5ca3227ca87f537e1b24593af0477c7a4348e" Nov 24 20:41:33 crc kubenswrapper[4812]: I1124 20:41:33.801536 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 24 20:41:34 crc kubenswrapper[4812]: I1124 20:41:34.518163 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 24 20:41:34 crc kubenswrapper[4812]: W1124 20:41:34.526584 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaebe1378_1358_41f2_a24d_4263a616d1f8.slice/crio-49dc2459326ac3295c9d5da95ce447bd2a77f40962d595d7d839f391eb26c562 WatchSource:0}: Error finding container 49dc2459326ac3295c9d5da95ce447bd2a77f40962d595d7d839f391eb26c562: Status 404 returned error can't find the container with id 49dc2459326ac3295c9d5da95ce447bd2a77f40962d595d7d839f391eb26c562 Nov 24 20:41:34 crc kubenswrapper[4812]: I1124 20:41:34.811476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"aebe1378-1358-41f2-a24d-4263a616d1f8","Type":"ContainerStarted","Data":"e627b21f9b4742f510890b23e21a47f55cee0a5bb5b493e3d8b3f607cd9c8de2"} Nov 24 20:41:34 crc kubenswrapper[4812]: I1124 20:41:34.811540 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"aebe1378-1358-41f2-a24d-4263a616d1f8","Type":"ContainerStarted","Data":"49dc2459326ac3295c9d5da95ce447bd2a77f40962d595d7d839f391eb26c562"} Nov 24 20:41:34 crc kubenswrapper[4812]: I1124 20:41:34.831843 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.8318158439999999 podStartE2EDuration="1.831815844s" podCreationTimestamp="2025-11-24 20:41:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:41:34.829116868 +0000 UTC m=+5088.618069279" watchObservedRunningTime="2025-11-24 20:41:34.831815844 +0000 UTC m=+5088.620768225" Nov 24 20:41:34 crc kubenswrapper[4812]: I1124 20:41:34.982108 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991b555d-63ae-4af0-9b93-dd100189ecec" path="/var/lib/kubelet/pods/991b555d-63ae-4af0-9b93-dd100189ecec/volumes" Nov 24 20:41:35 crc kubenswrapper[4812]: I1124 20:41:35.828897 4812 generic.go:334] "Generic (PLEG): container finished" podID="aebe1378-1358-41f2-a24d-4263a616d1f8" containerID="e627b21f9b4742f510890b23e21a47f55cee0a5bb5b493e3d8b3f607cd9c8de2" exitCode=1 Nov 24 20:41:35 crc kubenswrapper[4812]: I1124 20:41:35.828969 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"aebe1378-1358-41f2-a24d-4263a616d1f8","Type":"ContainerDied","Data":"e627b21f9b4742f510890b23e21a47f55cee0a5bb5b493e3d8b3f607cd9c8de2"} Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.238781 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.265326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xg2z\" (UniqueName: \"kubernetes.io/projected/aebe1378-1358-41f2-a24d-4263a616d1f8-kube-api-access-2xg2z\") pod \"aebe1378-1358-41f2-a24d-4263a616d1f8\" (UID: \"aebe1378-1358-41f2-a24d-4263a616d1f8\") " Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.285036 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aebe1378-1358-41f2-a24d-4263a616d1f8-kube-api-access-2xg2z" (OuterVolumeSpecName: "kube-api-access-2xg2z") pod "aebe1378-1358-41f2-a24d-4263a616d1f8" (UID: "aebe1378-1358-41f2-a24d-4263a616d1f8"). InnerVolumeSpecName "kube-api-access-2xg2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.307564 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.313572 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.367161 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xg2z\" (UniqueName: \"kubernetes.io/projected/aebe1378-1358-41f2-a24d-4263a616d1f8-kube-api-access-2xg2z\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.445139 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 24 20:41:37 crc kubenswrapper[4812]: E1124 20:41:37.445754 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aebe1378-1358-41f2-a24d-4263a616d1f8" containerName="mariadb-client-6-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.445788 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="aebe1378-1358-41f2-a24d-4263a616d1f8" containerName="mariadb-client-6-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.446095 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="aebe1378-1358-41f2-a24d-4263a616d1f8" containerName="mariadb-client-6-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.447030 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.451995 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.468505 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9l7k\" (UniqueName: \"kubernetes.io/projected/480fb33b-53cd-4dd8-a76a-edef2ac3a846-kube-api-access-f9l7k\") pod \"mariadb-client-7-default\" (UID: \"480fb33b-53cd-4dd8-a76a-edef2ac3a846\") " pod="openstack/mariadb-client-7-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.569326 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9l7k\" (UniqueName: \"kubernetes.io/projected/480fb33b-53cd-4dd8-a76a-edef2ac3a846-kube-api-access-f9l7k\") pod \"mariadb-client-7-default\" (UID: \"480fb33b-53cd-4dd8-a76a-edef2ac3a846\") " pod="openstack/mariadb-client-7-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.609120 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9l7k\" (UniqueName: \"kubernetes.io/projected/480fb33b-53cd-4dd8-a76a-edef2ac3a846-kube-api-access-f9l7k\") pod \"mariadb-client-7-default\" (UID: \"480fb33b-53cd-4dd8-a76a-edef2ac3a846\") " pod="openstack/mariadb-client-7-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.775143 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.861548 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49dc2459326ac3295c9d5da95ce447bd2a77f40962d595d7d839f391eb26c562" Nov 24 20:41:37 crc kubenswrapper[4812]: I1124 20:41:37.861565 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 24 20:41:38 crc kubenswrapper[4812]: I1124 20:41:38.398079 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 24 20:41:38 crc kubenswrapper[4812]: I1124 20:41:38.876408 4812 generic.go:334] "Generic (PLEG): container finished" podID="480fb33b-53cd-4dd8-a76a-edef2ac3a846" containerID="ee17754759a0a69b2f56d0e7308ee2342ced3889325ec9e623c64c85f05476d7" exitCode=0 Nov 24 20:41:38 crc kubenswrapper[4812]: I1124 20:41:38.876583 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"480fb33b-53cd-4dd8-a76a-edef2ac3a846","Type":"ContainerDied","Data":"ee17754759a0a69b2f56d0e7308ee2342ced3889325ec9e623c64c85f05476d7"} Nov 24 20:41:38 crc kubenswrapper[4812]: I1124 20:41:38.876826 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"480fb33b-53cd-4dd8-a76a-edef2ac3a846","Type":"ContainerStarted","Data":"ff344b9efeb98f05e9209e9a3ff2e21fdd343c8121d73c077ed42b2a92e56e6e"} Nov 24 20:41:38 crc kubenswrapper[4812]: I1124 20:41:38.977525 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aebe1378-1358-41f2-a24d-4263a616d1f8" path="/var/lib/kubelet/pods/aebe1378-1358-41f2-a24d-4263a616d1f8/volumes" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.324765 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.346373 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_480fb33b-53cd-4dd8-a76a-edef2ac3a846/mariadb-client-7-default/0.log" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.377041 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.391301 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.421781 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9l7k\" (UniqueName: \"kubernetes.io/projected/480fb33b-53cd-4dd8-a76a-edef2ac3a846-kube-api-access-f9l7k\") pod \"480fb33b-53cd-4dd8-a76a-edef2ac3a846\" (UID: \"480fb33b-53cd-4dd8-a76a-edef2ac3a846\") " Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.429955 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480fb33b-53cd-4dd8-a76a-edef2ac3a846-kube-api-access-f9l7k" (OuterVolumeSpecName: "kube-api-access-f9l7k") pod "480fb33b-53cd-4dd8-a76a-edef2ac3a846" (UID: "480fb33b-53cd-4dd8-a76a-edef2ac3a846"). InnerVolumeSpecName "kube-api-access-f9l7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.522980 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 24 20:41:40 crc kubenswrapper[4812]: E1124 20:41:40.523358 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480fb33b-53cd-4dd8-a76a-edef2ac3a846" containerName="mariadb-client-7-default" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.523373 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="480fb33b-53cd-4dd8-a76a-edef2ac3a846" containerName="mariadb-client-7-default" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.523524 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="480fb33b-53cd-4dd8-a76a-edef2ac3a846" containerName="mariadb-client-7-default" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.524020 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.524753 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9l7k\" (UniqueName: \"kubernetes.io/projected/480fb33b-53cd-4dd8-a76a-edef2ac3a846-kube-api-access-f9l7k\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.539833 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.626597 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-997hb\" (UniqueName: \"kubernetes.io/projected/2b70999d-b07a-4528-805b-af45c83cbdb2-kube-api-access-997hb\") pod \"mariadb-client-2\" (UID: \"2b70999d-b07a-4528-805b-af45c83cbdb2\") " pod="openstack/mariadb-client-2" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.727935 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-997hb\" (UniqueName: \"kubernetes.io/projected/2b70999d-b07a-4528-805b-af45c83cbdb2-kube-api-access-997hb\") pod \"mariadb-client-2\" (UID: \"2b70999d-b07a-4528-805b-af45c83cbdb2\") " pod="openstack/mariadb-client-2" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.750685 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-997hb\" (UniqueName: \"kubernetes.io/projected/2b70999d-b07a-4528-805b-af45c83cbdb2-kube-api-access-997hb\") pod \"mariadb-client-2\" (UID: \"2b70999d-b07a-4528-805b-af45c83cbdb2\") " pod="openstack/mariadb-client-2" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.844092 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.896305 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff344b9efeb98f05e9209e9a3ff2e21fdd343c8121d73c077ed42b2a92e56e6e" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.896458 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 24 20:41:40 crc kubenswrapper[4812]: I1124 20:41:40.975655 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480fb33b-53cd-4dd8-a76a-edef2ac3a846" path="/var/lib/kubelet/pods/480fb33b-53cd-4dd8-a76a-edef2ac3a846/volumes" Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.431790 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 24 20:41:41 crc kubenswrapper[4812]: W1124 20:41:41.448624 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b70999d_b07a_4528_805b_af45c83cbdb2.slice/crio-083dcd8d4587beff59feed9e678ec465198ecb2518b042407e46737315e0f910 WatchSource:0}: Error finding container 083dcd8d4587beff59feed9e678ec465198ecb2518b042407e46737315e0f910: Status 404 returned error can't find the container with id 083dcd8d4587beff59feed9e678ec465198ecb2518b042407e46737315e0f910 Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.841036 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bhkm5"] Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.842606 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.859496 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhkm5"] Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.904398 4812 generic.go:334] "Generic (PLEG): container finished" podID="2b70999d-b07a-4528-805b-af45c83cbdb2" containerID="032b9716509cb826cd2c9fa9280cdde971c0c32ffa278f6528739b37176aadb7" exitCode=0 Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.904447 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"2b70999d-b07a-4528-805b-af45c83cbdb2","Type":"ContainerDied","Data":"032b9716509cb826cd2c9fa9280cdde971c0c32ffa278f6528739b37176aadb7"} Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.904476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"2b70999d-b07a-4528-805b-af45c83cbdb2","Type":"ContainerStarted","Data":"083dcd8d4587beff59feed9e678ec465198ecb2518b042407e46737315e0f910"} Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.948144 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-catalog-content\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.948609 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5bl6\" (UniqueName: \"kubernetes.io/projected/d8cc17c7-7b87-4dbe-8145-ec687768578b-kube-api-access-t5bl6\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:41 crc kubenswrapper[4812]: I1124 20:41:41.948716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-utilities\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.051505 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-catalog-content\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.051566 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5bl6\" (UniqueName: \"kubernetes.io/projected/d8cc17c7-7b87-4dbe-8145-ec687768578b-kube-api-access-t5bl6\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.051659 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-utilities\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 
24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.052332 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-catalog-content\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.052803 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-utilities\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.070231 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5bl6\" (UniqueName: \"kubernetes.io/projected/d8cc17c7-7b87-4dbe-8145-ec687768578b-kube-api-access-t5bl6\") pod \"community-operators-bhkm5\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.164112 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.714833 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhkm5"] Nov 24 20:41:42 crc kubenswrapper[4812]: W1124 20:41:42.719698 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8cc17c7_7b87_4dbe_8145_ec687768578b.slice/crio-122e885c3c7b0fb83b56d98ede99bd42acd95f07bbebe5d0078406ecf9c52364 WatchSource:0}: Error finding container 122e885c3c7b0fb83b56d98ede99bd42acd95f07bbebe5d0078406ecf9c52364: Status 404 returned error can't find the container with id 122e885c3c7b0fb83b56d98ede99bd42acd95f07bbebe5d0078406ecf9c52364 Nov 24 20:41:42 crc kubenswrapper[4812]: I1124 20:41:42.917847 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkm5" event={"ID":"d8cc17c7-7b87-4dbe-8145-ec687768578b","Type":"ContainerStarted","Data":"122e885c3c7b0fb83b56d98ede99bd42acd95f07bbebe5d0078406ecf9c52364"} Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.303088 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.324173 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_2b70999d-b07a-4528-805b-af45c83cbdb2/mariadb-client-2/0.log" Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.364331 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.364575 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.371329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-997hb\" (UniqueName: \"kubernetes.io/projected/2b70999d-b07a-4528-805b-af45c83cbdb2-kube-api-access-997hb\") pod \"2b70999d-b07a-4528-805b-af45c83cbdb2\" (UID: \"2b70999d-b07a-4528-805b-af45c83cbdb2\") " Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.378731 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b70999d-b07a-4528-805b-af45c83cbdb2-kube-api-access-997hb" (OuterVolumeSpecName: "kube-api-access-997hb") pod "2b70999d-b07a-4528-805b-af45c83cbdb2" (UID: "2b70999d-b07a-4528-805b-af45c83cbdb2"). InnerVolumeSpecName "kube-api-access-997hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.473438 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-997hb\" (UniqueName: \"kubernetes.io/projected/2b70999d-b07a-4528-805b-af45c83cbdb2-kube-api-access-997hb\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.931143 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="083dcd8d4587beff59feed9e678ec465198ecb2518b042407e46737315e0f910" Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.931398 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.939696 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerID="9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e" exitCode=0 Nov 24 20:41:43 crc kubenswrapper[4812]: I1124 20:41:43.939871 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkm5" event={"ID":"d8cc17c7-7b87-4dbe-8145-ec687768578b","Type":"ContainerDied","Data":"9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e"} Nov 24 20:41:44 crc kubenswrapper[4812]: I1124 20:41:44.984768 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b70999d-b07a-4528-805b-af45c83cbdb2" path="/var/lib/kubelet/pods/2b70999d-b07a-4528-805b-af45c83cbdb2/volumes" Nov 24 20:41:45 crc kubenswrapper[4812]: I1124 20:41:45.966998 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerID="ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c" exitCode=0 Nov 24 20:41:45 crc kubenswrapper[4812]: I1124 20:41:45.967063 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkm5" event={"ID":"d8cc17c7-7b87-4dbe-8145-ec687768578b","Type":"ContainerDied","Data":"ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c"} Nov 24 20:41:46 crc kubenswrapper[4812]: I1124 20:41:46.982689 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkm5" event={"ID":"d8cc17c7-7b87-4dbe-8145-ec687768578b","Type":"ContainerStarted","Data":"6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9"} Nov 24 20:41:47 crc kubenswrapper[4812]: I1124 20:41:47.035073 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bhkm5" podStartSLOduration=3.541568711 podStartE2EDuration="6.035044917s" podCreationTimestamp="2025-11-24 20:41:41 +0000 UTC" firstStartedPulling="2025-11-24 20:41:43.942395791 +0000 UTC m=+5097.731348202" lastFinishedPulling="2025-11-24 20:41:46.435872007 +0000 UTC m=+5100.224824408" observedRunningTime="2025-11-24 20:41:47.009616483 +0000 UTC m=+5100.798568894" watchObservedRunningTime="2025-11-24 20:41:47.035044917 +0000 UTC m=+5100.823997328" Nov 24 20:41:52 crc kubenswrapper[4812]: I1124 20:41:52.164520 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:52 crc kubenswrapper[4812]: I1124 20:41:52.165124 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:52 crc kubenswrapper[4812]: I1124 20:41:52.228703 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:53 crc kubenswrapper[4812]: I1124 20:41:53.142861 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:53 crc kubenswrapper[4812]: I1124 20:41:53.216216 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhkm5"] Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.086026 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bhkm5" 
podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="registry-server" containerID="cri-o://6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9" gracePeriod=2 Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.668214 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.779958 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-utilities\") pod \"d8cc17c7-7b87-4dbe-8145-ec687768578b\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.780167 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5bl6\" (UniqueName: \"kubernetes.io/projected/d8cc17c7-7b87-4dbe-8145-ec687768578b-kube-api-access-t5bl6\") pod \"d8cc17c7-7b87-4dbe-8145-ec687768578b\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.780308 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-catalog-content\") pod \"d8cc17c7-7b87-4dbe-8145-ec687768578b\" (UID: \"d8cc17c7-7b87-4dbe-8145-ec687768578b\") " Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.782444 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-utilities" (OuterVolumeSpecName: "utilities") pod "d8cc17c7-7b87-4dbe-8145-ec687768578b" (UID: "d8cc17c7-7b87-4dbe-8145-ec687768578b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.787060 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8cc17c7-7b87-4dbe-8145-ec687768578b-kube-api-access-t5bl6" (OuterVolumeSpecName: "kube-api-access-t5bl6") pod "d8cc17c7-7b87-4dbe-8145-ec687768578b" (UID: "d8cc17c7-7b87-4dbe-8145-ec687768578b"). InnerVolumeSpecName "kube-api-access-t5bl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.882790 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:55 crc kubenswrapper[4812]: I1124 20:41:55.882853 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5bl6\" (UniqueName: \"kubernetes.io/projected/d8cc17c7-7b87-4dbe-8145-ec687768578b-kube-api-access-t5bl6\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.100518 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerID="6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9" exitCode=0 Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.100563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkm5" event={"ID":"d8cc17c7-7b87-4dbe-8145-ec687768578b","Type":"ContainerDied","Data":"6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9"} Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.100602 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhkm5" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.100750 4812 scope.go:117] "RemoveContainer" containerID="6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.100686 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhkm5" event={"ID":"d8cc17c7-7b87-4dbe-8145-ec687768578b","Type":"ContainerDied","Data":"122e885c3c7b0fb83b56d98ede99bd42acd95f07bbebe5d0078406ecf9c52364"} Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.136290 4812 scope.go:117] "RemoveContainer" containerID="ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.167472 4812 scope.go:117] "RemoveContainer" containerID="9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.217500 4812 scope.go:117] "RemoveContainer" containerID="6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9" Nov 24 20:41:56 crc kubenswrapper[4812]: E1124 20:41:56.218158 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9\": container with ID starting with 6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9 not found: ID does not exist" containerID="6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.218194 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9"} err="failed to get container status \"6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9\": rpc error: code = NotFound desc = could not find container \"6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9\": container with ID starting with 6041fdd6f46a19133a4e6bad2d3014d37250968448e3acbf4360b8836a3c38e9 not found: ID does not exist" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.218220 4812 scope.go:117] 
"RemoveContainer" containerID="ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c" Nov 24 20:41:56 crc kubenswrapper[4812]: E1124 20:41:56.218830 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c\": container with ID starting with ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c not found: ID does not exist" containerID="ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.218876 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c"} err="failed to get container status \"ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c\": rpc error: code = NotFound desc = could not find container \"ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c\": container with ID starting with ee9c8a35fe5ec1effdee05eb7e63e74629dd0da83183882da4bcac980642ac3c not found: ID does not exist" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.218894 4812 scope.go:117] "RemoveContainer" containerID="9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e" Nov 24 20:41:56 crc kubenswrapper[4812]: E1124 20:41:56.219424 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e\": container with ID starting with 9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e not found: ID does not exist" containerID="9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.219462 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e"} err="failed to get container status \"9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e\": rpc error: code = NotFound desc = could not find container \"9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e\": container with ID starting with 9a54bfb21ca921c9dfdbd9e86e738ee9596a43e3fbbcaac092211308c278df8e not found: ID does not exist" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.427663 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8cc17c7-7b87-4dbe-8145-ec687768578b" (UID: "d8cc17c7-7b87-4dbe-8145-ec687768578b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.496209 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8cc17c7-7b87-4dbe-8145-ec687768578b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.761607 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhkm5"] Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.768222 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bhkm5"] Nov 24 20:41:56 crc kubenswrapper[4812]: I1124 20:41:56.985866 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" path="/var/lib/kubelet/pods/d8cc17c7-7b87-4dbe-8145-ec687768578b/volumes" Nov 24 20:42:53 crc kubenswrapper[4812]: I1124 20:42:53.031044 4812 scope.go:117] "RemoveContainer" containerID="0eba4e002acca2167ab08e84f811c8c4ee86538c7a5a5594cd5903171a8eb80e" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.458655 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nm9w4"] Nov 24 20:43:00 crc kubenswrapper[4812]: E1124 20:43:00.459834 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="registry-server" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.459858 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="registry-server" Nov 24 20:43:00 crc kubenswrapper[4812]: E1124 20:43:00.459917 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b70999d-b07a-4528-805b-af45c83cbdb2" containerName="mariadb-client-2" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.459943 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b70999d-b07a-4528-805b-af45c83cbdb2" containerName="mariadb-client-2" Nov 24 20:43:00 crc kubenswrapper[4812]: E1124 20:43:00.459972 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="extract-utilities" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.459985 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="extract-utilities" Nov 24 20:43:00 crc kubenswrapper[4812]: E1124 20:43:00.460016 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="extract-content" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.460028 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="extract-content" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.460298 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cc17c7-7b87-4dbe-8145-ec687768578b" containerName="registry-server" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.460402 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b70999d-b07a-4528-805b-af45c83cbdb2" containerName="mariadb-client-2" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.462909 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.475269 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nm9w4"] Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.498850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9rxk\" (UniqueName: \"kubernetes.io/projected/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-kube-api-access-s9rxk\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.498937 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-catalog-content\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.498974 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-utilities\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.600278 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9rxk\" (UniqueName: \"kubernetes.io/projected/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-kube-api-access-s9rxk\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.600370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-catalog-content\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.600401 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-utilities\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.600952 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-utilities\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.601473 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-catalog-content\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.628908 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s9rxk\" (UniqueName: \"kubernetes.io/projected/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-kube-api-access-s9rxk\") pod \"redhat-operators-nm9w4\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:00 crc kubenswrapper[4812]: I1124 20:43:00.789393 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:01 crc kubenswrapper[4812]: I1124 20:43:01.207550 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nm9w4"] Nov 24 20:43:01 crc kubenswrapper[4812]: I1124 20:43:01.771716 4812 generic.go:334] "Generic (PLEG): container finished" podID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerID="fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e" exitCode=0 Nov 24 20:43:01 crc kubenswrapper[4812]: I1124 20:43:01.771801 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9w4" event={"ID":"90ec5d55-b8b3-4ad0-a203-49efaf2cd396","Type":"ContainerDied","Data":"fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e"} Nov 24 20:43:01 crc kubenswrapper[4812]: I1124 20:43:01.772082 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9w4" event={"ID":"90ec5d55-b8b3-4ad0-a203-49efaf2cd396","Type":"ContainerStarted","Data":"6d1bf65ef06eb24267e887c7c0d162603b1b9558330d48c43ef12bee45c22443"} Nov 24 20:43:01 crc kubenswrapper[4812]: I1124 20:43:01.774855 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 20:43:02 crc kubenswrapper[4812]: I1124 20:43:02.784654 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9w4" event={"ID":"90ec5d55-b8b3-4ad0-a203-49efaf2cd396","Type":"ContainerStarted","Data":"b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0"} Nov 24 20:43:03 crc kubenswrapper[4812]: I1124 20:43:03.798935 4812 generic.go:334] "Generic (PLEG): container finished" podID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerID="b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0" exitCode=0 Nov 24 20:43:03 crc kubenswrapper[4812]: I1124 20:43:03.799056 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9w4" event={"ID":"90ec5d55-b8b3-4ad0-a203-49efaf2cd396","Type":"ContainerDied","Data":"b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0"} Nov 24 20:43:04 crc kubenswrapper[4812]: I1124 20:43:04.812958 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9w4" event={"ID":"90ec5d55-b8b3-4ad0-a203-49efaf2cd396","Type":"ContainerStarted","Data":"f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d"} Nov 24 20:43:04 crc kubenswrapper[4812]: I1124 20:43:04.844137 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nm9w4" podStartSLOduration=2.3949932130000002 podStartE2EDuration="4.844103224s" podCreationTimestamp="2025-11-24 20:43:00 +0000 UTC" firstStartedPulling="2025-11-24 20:43:01.774584498 +0000 UTC m=+5175.563536869" lastFinishedPulling="2025-11-24 20:43:04.223694469 +0000 UTC m=+5178.012646880" observedRunningTime="2025-11-24 20:43:04.83802004 +0000 UTC m=+5178.626972461" watchObservedRunningTime="2025-11-24 20:43:04.844103224 +0000 UTC m=+5178.633055635" Nov 24 20:43:10 crc 
kubenswrapper[4812]: I1124 20:43:10.790001 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:10 crc kubenswrapper[4812]: I1124 20:43:10.790379 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:10 crc kubenswrapper[4812]: I1124 20:43:10.843445 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:10 crc kubenswrapper[4812]: I1124 20:43:10.933607 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:11 crc kubenswrapper[4812]: I1124 20:43:11.084227 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nm9w4"] Nov 24 20:43:12 crc kubenswrapper[4812]: I1124 20:43:12.898852 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nm9w4" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerName="registry-server" containerID="cri-o://f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d" gracePeriod=2 Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.407819 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.419404 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-catalog-content\") pod \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.419471 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9rxk\" (UniqueName: \"kubernetes.io/projected/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-kube-api-access-s9rxk\") pod \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.419538 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-utilities\") pod \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\" (UID: \"90ec5d55-b8b3-4ad0-a203-49efaf2cd396\") " Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.420694 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-utilities" (OuterVolumeSpecName: "utilities") pod "90ec5d55-b8b3-4ad0-a203-49efaf2cd396" (UID: "90ec5d55-b8b3-4ad0-a203-49efaf2cd396"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.428573 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-kube-api-access-s9rxk" (OuterVolumeSpecName: "kube-api-access-s9rxk") pod "90ec5d55-b8b3-4ad0-a203-49efaf2cd396" (UID: "90ec5d55-b8b3-4ad0-a203-49efaf2cd396"). InnerVolumeSpecName "kube-api-access-s9rxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.520476 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9rxk\" (UniqueName: \"kubernetes.io/projected/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-kube-api-access-s9rxk\") on node \"crc\" DevicePath \"\"" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.520629 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.914238 4812 generic.go:334] "Generic (PLEG): container finished" podID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerID="f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d" exitCode=0 Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.914379 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nm9w4" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.914391 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9w4" event={"ID":"90ec5d55-b8b3-4ad0-a203-49efaf2cd396","Type":"ContainerDied","Data":"f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d"} Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.916606 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nm9w4" event={"ID":"90ec5d55-b8b3-4ad0-a203-49efaf2cd396","Type":"ContainerDied","Data":"6d1bf65ef06eb24267e887c7c0d162603b1b9558330d48c43ef12bee45c22443"} Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.916653 4812 scope.go:117] "RemoveContainer" containerID="f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.959240 4812 scope.go:117] "RemoveContainer" containerID="b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0" Nov 24 20:43:13 crc kubenswrapper[4812]: I1124 20:43:13.987596 4812 scope.go:117] "RemoveContainer" containerID="fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.026233 4812 scope.go:117] "RemoveContainer" containerID="f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d" Nov 24 20:43:14 crc kubenswrapper[4812]: E1124 20:43:14.027139 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d\": container with ID starting with f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d not found: ID does not exist" containerID="f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.027317 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d"} err="failed to get container status \"f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d\": rpc error: code = NotFound desc = could not find container \"f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d\": container with ID starting with f2971cdcc5ca110619dd2919b8c6da502a21a5c3774d2a9144240f739a2ead7d not found: ID does not exist" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.027697 4812 scope.go:117] 
"RemoveContainer" containerID="b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0" Nov 24 20:43:14 crc kubenswrapper[4812]: E1124 20:43:14.028426 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0\": container with ID starting with b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0 not found: ID does not exist" containerID="b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.028484 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0"} err="failed to get container status \"b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0\": rpc error: code = NotFound desc = could not find container \"b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0\": container with ID starting with b1fbe7fccbca3966080180e8dcebdc10982ba7cae3d762169980f16558801ce0 not found: ID does not exist" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.028528 4812 scope.go:117] "RemoveContainer" containerID="fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e" Nov 24 20:43:14 crc kubenswrapper[4812]: E1124 20:43:14.028868 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e\": container with ID starting with fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e not found: ID does not exist" containerID="fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.028906 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e"} err="failed to get container status \"fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e\": rpc error: code = NotFound desc = could not find container \"fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e\": container with ID starting with fbf0fed2442b230e041cb2ac46f61a42487423f4acec63ca8a60368bbaf90b5e not found: ID does not exist" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.922573 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90ec5d55-b8b3-4ad0-a203-49efaf2cd396" (UID: "90ec5d55-b8b3-4ad0-a203-49efaf2cd396"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:43:14 crc kubenswrapper[4812]: I1124 20:43:14.943105 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90ec5d55-b8b3-4ad0-a203-49efaf2cd396-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:43:15 crc kubenswrapper[4812]: I1124 20:43:15.152177 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nm9w4"] Nov 24 20:43:15 crc kubenswrapper[4812]: I1124 20:43:15.165506 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nm9w4"] Nov 24 20:43:16 crc kubenswrapper[4812]: I1124 20:43:16.975941 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" path="/var/lib/kubelet/pods/90ec5d55-b8b3-4ad0-a203-49efaf2cd396/volumes" Nov 24 20:44:02 crc kubenswrapper[4812]: I1124 20:44:02.998198 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:44:03 crc kubenswrapper[4812]: I1124 20:44:02.998952 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:44:32 crc kubenswrapper[4812]: I1124 20:44:32.998840 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:44:33 crc kubenswrapper[4812]: I1124 20:44:32.999735 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.959016 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fz489"] Nov 24 20:44:53 crc kubenswrapper[4812]: E1124 20:44:53.960262 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerName="extract-utilities" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.960290 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerName="extract-utilities" Nov 24 20:44:53 crc kubenswrapper[4812]: E1124 20:44:53.960317 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerName="extract-content" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.960330 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerName="extract-content" Nov 24 20:44:53 crc kubenswrapper[4812]: E1124 20:44:53.960441 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" 
containerName="registry-server" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.960461 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerName="registry-server" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.960781 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ec5d55-b8b3-4ad0-a203-49efaf2cd396" containerName="registry-server" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.963189 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.974555 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fz489"] Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.999714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-utilities\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:53 crc kubenswrapper[4812]: I1124 20:44:53.999935 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86ddp\" (UniqueName: \"kubernetes.io/projected/19879df2-fb09-406f-9eee-21376ddc2a5e-kube-api-access-86ddp\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:53.999989 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-catalog-content\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.101996 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-utilities\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.102093 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86ddp\" (UniqueName: \"kubernetes.io/projected/19879df2-fb09-406f-9eee-21376ddc2a5e-kube-api-access-86ddp\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.102120 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-catalog-content\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.102529 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-catalog-content\") pod \"certified-operators-fz489\" (UID: 
\"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.102741 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-utilities\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.136623 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86ddp\" (UniqueName: \"kubernetes.io/projected/19879df2-fb09-406f-9eee-21376ddc2a5e-kube-api-access-86ddp\") pod \"certified-operators-fz489\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") " pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.312075 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fz489" Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.768114 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fz489"] Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.955677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerStarted","Data":"88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676"} Nov 24 20:44:54 crc kubenswrapper[4812]: I1124 20:44:54.955728 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerStarted","Data":"4ec2136e9c99d70dfc7a4f99ad811c3ad983b79577dd6bab379c71a2933c4fb0"} Nov 24 20:44:55 crc kubenswrapper[4812]: I1124 20:44:55.967994 4812 generic.go:334] "Generic (PLEG): container finished" podID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerID="88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676" exitCode=0 Nov 24 20:44:55 crc kubenswrapper[4812]: I1124 20:44:55.968117 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerDied","Data":"88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676"} Nov 24 20:44:56 crc kubenswrapper[4812]: I1124 20:44:56.989330 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerStarted","Data":"4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1"} Nov 24 20:44:57 crc kubenswrapper[4812]: I1124 20:44:57.991256 4812 generic.go:334] "Generic (PLEG): container finished" podID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerID="4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1" exitCode=0 Nov 24 20:44:57 crc kubenswrapper[4812]: I1124 20:44:57.991314 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerDied","Data":"4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1"} Nov 24 20:44:59 crc kubenswrapper[4812]: I1124 20:44:59.003208 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" 
event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerStarted","Data":"64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c"} Nov 24 20:44:59 crc kubenswrapper[4812]: I1124 20:44:59.025704 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fz489" podStartSLOduration=3.590492256 podStartE2EDuration="6.02567783s" podCreationTimestamp="2025-11-24 20:44:53 +0000 UTC" firstStartedPulling="2025-11-24 20:44:55.97003148 +0000 UTC m=+5289.758983891" lastFinishedPulling="2025-11-24 20:44:58.405217094 +0000 UTC m=+5292.194169465" observedRunningTime="2025-11-24 20:44:59.021858152 +0000 UTC m=+5292.810810583" watchObservedRunningTime="2025-11-24 20:44:59.02567783 +0000 UTC m=+5292.814630231" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.152065 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn"] Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.153222 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.155496 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.155749 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.165188 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn"] Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.198719 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7p9\" (UniqueName: \"kubernetes.io/projected/9f58e63a-e7e2-498a-af27-92816aa53ba1-kube-api-access-bg7p9\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.198800 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f58e63a-e7e2-498a-af27-92816aa53ba1-config-volume\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.198871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f58e63a-e7e2-498a-af27-92816aa53ba1-secret-volume\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.300014 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f58e63a-e7e2-498a-af27-92816aa53ba1-secret-volume\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc 
kubenswrapper[4812]: I1124 20:45:00.301205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg7p9\" (UniqueName: \"kubernetes.io/projected/9f58e63a-e7e2-498a-af27-92816aa53ba1-kube-api-access-bg7p9\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.301305 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f58e63a-e7e2-498a-af27-92816aa53ba1-config-volume\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.302922 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f58e63a-e7e2-498a-af27-92816aa53ba1-config-volume\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.309729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f58e63a-e7e2-498a-af27-92816aa53ba1-secret-volume\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.317905 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg7p9\" (UniqueName: \"kubernetes.io/projected/9f58e63a-e7e2-498a-af27-92816aa53ba1-kube-api-access-bg7p9\") pod \"collect-profiles-29400285-t48mn\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.484416 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:00 crc kubenswrapper[4812]: I1124 20:45:00.730909 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn"] Nov 24 20:45:01 crc kubenswrapper[4812]: I1124 20:45:01.047576 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" event={"ID":"9f58e63a-e7e2-498a-af27-92816aa53ba1","Type":"ContainerStarted","Data":"23a20c6ce136498b242d6000830dddc00eef79c8771a3144488a735792435aae"} Nov 24 20:45:01 crc kubenswrapper[4812]: I1124 20:45:01.047644 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" event={"ID":"9f58e63a-e7e2-498a-af27-92816aa53ba1","Type":"ContainerStarted","Data":"45239cec1d1721d9fd74aba7166d14f0b79978eb277250aef4772490dc686e47"} Nov 24 20:45:01 crc kubenswrapper[4812]: I1124 20:45:01.073979 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" podStartSLOduration=1.073958873 podStartE2EDuration="1.073958873s" podCreationTimestamp="2025-11-24 20:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:45:01.067263482 +0000 UTC m=+5294.856215893" watchObservedRunningTime="2025-11-24 20:45:01.073958873 +0000 UTC m=+5294.862911254" Nov 24 20:45:02 crc kubenswrapper[4812]: I1124 20:45:02.059731 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f58e63a-e7e2-498a-af27-92816aa53ba1" containerID="23a20c6ce136498b242d6000830dddc00eef79c8771a3144488a735792435aae" exitCode=0 Nov 24 20:45:02 crc kubenswrapper[4812]: I1124 20:45:02.059877 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" event={"ID":"9f58e63a-e7e2-498a-af27-92816aa53ba1","Type":"ContainerDied","Data":"23a20c6ce136498b242d6000830dddc00eef79c8771a3144488a735792435aae"} Nov 24 20:45:02 crc kubenswrapper[4812]: I1124 20:45:02.998531 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:45:02 crc kubenswrapper[4812]: I1124 20:45:02.999008 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:45:02 crc kubenswrapper[4812]: I1124 20:45:02.999079 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.000219 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, 
will be restarted" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.000331 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" gracePeriod=600 Nov 24 20:45:03 crc kubenswrapper[4812]: E1124 20:45:03.141090 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.446904 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.554913 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f58e63a-e7e2-498a-af27-92816aa53ba1-secret-volume\") pod \"9f58e63a-e7e2-498a-af27-92816aa53ba1\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.555004 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f58e63a-e7e2-498a-af27-92816aa53ba1-config-volume\") pod \"9f58e63a-e7e2-498a-af27-92816aa53ba1\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.555061 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg7p9\" (UniqueName: \"kubernetes.io/projected/9f58e63a-e7e2-498a-af27-92816aa53ba1-kube-api-access-bg7p9\") pod \"9f58e63a-e7e2-498a-af27-92816aa53ba1\" (UID: \"9f58e63a-e7e2-498a-af27-92816aa53ba1\") " Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.556804 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f58e63a-e7e2-498a-af27-92816aa53ba1-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f58e63a-e7e2-498a-af27-92816aa53ba1" (UID: "9f58e63a-e7e2-498a-af27-92816aa53ba1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.560976 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f58e63a-e7e2-498a-af27-92816aa53ba1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f58e63a-e7e2-498a-af27-92816aa53ba1" (UID: "9f58e63a-e7e2-498a-af27-92816aa53ba1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.561541 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f58e63a-e7e2-498a-af27-92816aa53ba1-kube-api-access-bg7p9" (OuterVolumeSpecName: "kube-api-access-bg7p9") pod "9f58e63a-e7e2-498a-af27-92816aa53ba1" (UID: "9f58e63a-e7e2-498a-af27-92816aa53ba1"). InnerVolumeSpecName "kube-api-access-bg7p9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.657038 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f58e63a-e7e2-498a-af27-92816aa53ba1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.657073 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f58e63a-e7e2-498a-af27-92816aa53ba1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 20:45:03 crc kubenswrapper[4812]: I1124 20:45:03.657085 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg7p9\" (UniqueName: \"kubernetes.io/projected/9f58e63a-e7e2-498a-af27-92816aa53ba1-kube-api-access-bg7p9\") on node \"crc\" DevicePath \"\"" Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.082567 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" exitCode=0 Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.082625 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"} Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.082730 4812 scope.go:117] "RemoveContainer" containerID="dbde7a21765fc5b63caec3c7e3d0f2beae4d92e2ef96dc251d6a1b6345e4d456" Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.083702 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:45:04 crc kubenswrapper[4812]: E1124 20:45:04.085433 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.086315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn" event={"ID":"9f58e63a-e7e2-498a-af27-92816aa53ba1","Type":"ContainerDied","Data":"45239cec1d1721d9fd74aba7166d14f0b79978eb277250aef4772490dc686e47"} Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.086406 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45239cec1d1721d9fd74aba7166d14f0b79978eb277250aef4772490dc686e47" Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.086513 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn"
Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.313325 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fz489"
Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.313415 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fz489"
Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.399219 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fz489"
Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.543888 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz"]
Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.552213 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400240-qknhz"]
Nov 24 20:45:04 crc kubenswrapper[4812]: I1124 20:45:04.994608 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85bbfa18-5481-4840-a56f-2b2ba9da1bab" path="/var/lib/kubelet/pods/85bbfa18-5481-4840-a56f-2b2ba9da1bab/volumes"
Nov 24 20:45:05 crc kubenswrapper[4812]: I1124 20:45:05.177043 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fz489"
Nov 24 20:45:05 crc kubenswrapper[4812]: I1124 20:45:05.252989 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fz489"]
Nov 24 20:45:07 crc kubenswrapper[4812]: I1124 20:45:07.125201 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fz489" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="registry-server" containerID="cri-o://64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c" gracePeriod=2
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.126884 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fz489"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.135710 4812 generic.go:334] "Generic (PLEG): container finished" podID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerID="64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c" exitCode=0
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.135767 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fz489"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.135807 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerDied","Data":"64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c"}
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.135872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fz489" event={"ID":"19879df2-fb09-406f-9eee-21376ddc2a5e","Type":"ContainerDied","Data":"4ec2136e9c99d70dfc7a4f99ad811c3ad983b79577dd6bab379c71a2933c4fb0"}
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.135891 4812 scope.go:117] "RemoveContainer" containerID="64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.168579 4812 scope.go:117] "RemoveContainer" containerID="4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.195935 4812 scope.go:117] "RemoveContainer" containerID="88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.224146 4812 scope.go:117] "RemoveContainer" containerID="64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c"
Nov 24 20:45:08 crc kubenswrapper[4812]: E1124 20:45:08.224589 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c\": container with ID starting with 64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c not found: ID does not exist" containerID="64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.224633 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c"} err="failed to get container status \"64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c\": rpc error: code = NotFound desc = could not find container \"64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c\": container with ID starting with 64234e72daca0517b54ef0b9c7997beedf53a28fa523fb8ce8cfcf6999fc663c not found: ID does not exist"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.224662 4812 scope.go:117] "RemoveContainer" containerID="4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1"
Nov 24 20:45:08 crc kubenswrapper[4812]: E1124 20:45:08.225087 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1\": container with ID starting with 4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1 not found: ID does not exist" containerID="4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.225122 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1"} err="failed to get container status \"4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1\": rpc error: code = NotFound desc = could not find container \"4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1\": container with ID starting with 4b2030a02f74e780139f3dfb0823c0eec825df78fef0a7565b37b6a1c7ca47d1 not found: ID does not exist"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.225175 4812 scope.go:117] "RemoveContainer" containerID="88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676"
Nov 24 20:45:08 crc kubenswrapper[4812]: E1124 20:45:08.225729 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676\": container with ID starting with 88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676 not found: ID does not exist" containerID="88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.225755 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676"} err="failed to get container status \"88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676\": rpc error: code = NotFound desc = could not find container \"88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676\": container with ID starting with 88df8412f4c221ecaea9168acf620bbbfbfef4d306e281c03f4b0351c31aa676 not found: ID does not exist"
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.238415 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-utilities\") pod \"19879df2-fb09-406f-9eee-21376ddc2a5e\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") "
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.238546 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-catalog-content\") pod \"19879df2-fb09-406f-9eee-21376ddc2a5e\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") "
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.238587 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86ddp\" (UniqueName: \"kubernetes.io/projected/19879df2-fb09-406f-9eee-21376ddc2a5e-kube-api-access-86ddp\") pod \"19879df2-fb09-406f-9eee-21376ddc2a5e\" (UID: \"19879df2-fb09-406f-9eee-21376ddc2a5e\") "
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.239445 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-utilities" (OuterVolumeSpecName: "utilities") pod "19879df2-fb09-406f-9eee-21376ddc2a5e" (UID: "19879df2-fb09-406f-9eee-21376ddc2a5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.245141 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19879df2-fb09-406f-9eee-21376ddc2a5e-kube-api-access-86ddp" (OuterVolumeSpecName: "kube-api-access-86ddp") pod "19879df2-fb09-406f-9eee-21376ddc2a5e" (UID: "19879df2-fb09-406f-9eee-21376ddc2a5e"). InnerVolumeSpecName "kube-api-access-86ddp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.285066 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19879df2-fb09-406f-9eee-21376ddc2a5e" (UID: "19879df2-fb09-406f-9eee-21376ddc2a5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.340001 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.340033 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19879df2-fb09-406f-9eee-21376ddc2a5e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.340045 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86ddp\" (UniqueName: \"kubernetes.io/projected/19879df2-fb09-406f-9eee-21376ddc2a5e-kube-api-access-86ddp\") on node \"crc\" DevicePath \"\""
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.496510 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fz489"]
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.510427 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fz489"]
Nov 24 20:45:08 crc kubenswrapper[4812]: I1124 20:45:08.987498 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" path="/var/lib/kubelet/pods/19879df2-fb09-406f-9eee-21376ddc2a5e/volumes"
Nov 24 20:45:17 crc kubenswrapper[4812]: I1124 20:45:17.966756 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"
Nov 24 20:45:17 crc kubenswrapper[4812]: E1124 20:45:17.967961 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.670291 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"]
Nov 24 20:45:26 crc kubenswrapper[4812]: E1124 20:45:26.671469 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="extract-utilities"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.671494 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="extract-utilities"
Nov 24 20:45:26 crc kubenswrapper[4812]: E1124 20:45:26.671515 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f58e63a-e7e2-498a-af27-92816aa53ba1" containerName="collect-profiles"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.671527 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f58e63a-e7e2-498a-af27-92816aa53ba1" containerName="collect-profiles"
Nov 24 20:45:26 crc kubenswrapper[4812]: E1124 20:45:26.671559 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="registry-server"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.671592 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="registry-server"
Nov 24 20:45:26 crc kubenswrapper[4812]: E1124 20:45:26.671625 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="extract-content"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.671637 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="extract-content"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.671875 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f58e63a-e7e2-498a-af27-92816aa53ba1" containerName="collect-profiles"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.671912 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="19879df2-fb09-406f-9eee-21376ddc2a5e" containerName="registry-server"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.672958 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.675531 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-pz88h"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.680907 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.860326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\") pod \"mariadb-copy-data\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") " pod="openstack/mariadb-copy-data"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.860425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzfnn\" (UniqueName: \"kubernetes.io/projected/fd16f350-87b3-4ddc-baf8-fd693b722413-kube-api-access-tzfnn\") pod \"mariadb-copy-data\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") " pod="openstack/mariadb-copy-data"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.962324 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzfnn\" (UniqueName: \"kubernetes.io/projected/fd16f350-87b3-4ddc-baf8-fd693b722413-kube-api-access-tzfnn\") pod \"mariadb-copy-data\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") " pod="openstack/mariadb-copy-data"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.962594 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\") pod \"mariadb-copy-data\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") " pod="openstack/mariadb-copy-data"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.966461 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.966818 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\") pod \"mariadb-copy-data\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9e09f1ad1256e14521d210a8b3f469b16d6eefef4b05f2262643c9832af6c77c/globalmount\"" pod="openstack/mariadb-copy-data"
Nov 24 20:45:26 crc kubenswrapper[4812]: I1124 20:45:26.997761 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzfnn\" (UniqueName: \"kubernetes.io/projected/fd16f350-87b3-4ddc-baf8-fd693b722413-kube-api-access-tzfnn\") pod \"mariadb-copy-data\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") " pod="openstack/mariadb-copy-data"
Nov 24 20:45:27 crc kubenswrapper[4812]: I1124 20:45:27.011311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\") pod \"mariadb-copy-data\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") " pod="openstack/mariadb-copy-data"
Nov 24 20:45:27 crc kubenswrapper[4812]: I1124 20:45:27.294512 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Nov 24 20:45:27 crc kubenswrapper[4812]: I1124 20:45:27.977685 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"]
Nov 24 20:45:28 crc kubenswrapper[4812]: I1124 20:45:28.342793 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fd16f350-87b3-4ddc-baf8-fd693b722413","Type":"ContainerStarted","Data":"7220ae976c51ea972924f92b1b0a12ed85267f705b6dd4b01b7231de3f0727e3"}
Nov 24 20:45:28 crc kubenswrapper[4812]: I1124 20:45:28.342873 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fd16f350-87b3-4ddc-baf8-fd693b722413","Type":"ContainerStarted","Data":"17f8bccb77f77502a3c88c7702f56ed619f989224562b13677c1c76740c74daf"}
Nov 24 20:45:28 crc kubenswrapper[4812]: I1124 20:45:28.373691 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.373659609 podStartE2EDuration="3.373659609s" podCreationTimestamp="2025-11-24 20:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:45:28.362958225 +0000 UTC m=+5322.151910646" watchObservedRunningTime="2025-11-24 20:45:28.373659609 +0000 UTC m=+5322.162612010"
Nov 24 20:45:29 crc kubenswrapper[4812]: I1124 20:45:29.966510 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"
Nov 24 20:45:29 crc kubenswrapper[4812]: E1124 20:45:29.967276 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.150999 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.153124 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.161981 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4c2\" (UniqueName: \"kubernetes.io/projected/b9de7c2e-87ab-4a83-a176-b953b5a5bc0f-kube-api-access-pp4c2\") pod \"mariadb-client\" (UID: \"b9de7c2e-87ab-4a83-a176-b953b5a5bc0f\") " pod="openstack/mariadb-client"
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.171217 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.263717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4c2\" (UniqueName: \"kubernetes.io/projected/b9de7c2e-87ab-4a83-a176-b953b5a5bc0f-kube-api-access-pp4c2\") pod \"mariadb-client\" (UID: \"b9de7c2e-87ab-4a83-a176-b953b5a5bc0f\") " pod="openstack/mariadb-client"
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.285819 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4c2\" (UniqueName: \"kubernetes.io/projected/b9de7c2e-87ab-4a83-a176-b953b5a5bc0f-kube-api-access-pp4c2\") pod \"mariadb-client\" (UID: \"b9de7c2e-87ab-4a83-a176-b953b5a5bc0f\") " pod="openstack/mariadb-client"
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.486222 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:31 crc kubenswrapper[4812]: I1124 20:45:31.797518 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:31 crc kubenswrapper[4812]: W1124 20:45:31.806293 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9de7c2e_87ab_4a83_a176_b953b5a5bc0f.slice/crio-89d9c6887afb372dd8fc3ed55e6ab302736a0b3a4a0b2bf2851bca6b5aab1545 WatchSource:0}: Error finding container 89d9c6887afb372dd8fc3ed55e6ab302736a0b3a4a0b2bf2851bca6b5aab1545: Status 404 returned error can't find the container with id 89d9c6887afb372dd8fc3ed55e6ab302736a0b3a4a0b2bf2851bca6b5aab1545
Nov 24 20:45:32 crc kubenswrapper[4812]: I1124 20:45:32.404088 4812 generic.go:334] "Generic (PLEG): container finished" podID="b9de7c2e-87ab-4a83-a176-b953b5a5bc0f" containerID="921c1f7cc32b005d7a429c998937e7d805af23852fb987f712c062dcd679d94b" exitCode=0
Nov 24 20:45:32 crc kubenswrapper[4812]: I1124 20:45:32.404210 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b9de7c2e-87ab-4a83-a176-b953b5a5bc0f","Type":"ContainerDied","Data":"921c1f7cc32b005d7a429c998937e7d805af23852fb987f712c062dcd679d94b"}
Nov 24 20:45:32 crc kubenswrapper[4812]: I1124 20:45:32.407416 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"b9de7c2e-87ab-4a83-a176-b953b5a5bc0f","Type":"ContainerStarted","Data":"89d9c6887afb372dd8fc3ed55e6ab302736a0b3a4a0b2bf2851bca6b5aab1545"}
Nov 24 20:45:33 crc kubenswrapper[4812]: I1124 20:45:33.878803 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:33 crc kubenswrapper[4812]: I1124 20:45:33.908103 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_b9de7c2e-87ab-4a83-a176-b953b5a5bc0f/mariadb-client/0.log"
Nov 24 20:45:33 crc kubenswrapper[4812]: I1124 20:45:33.915410 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp4c2\" (UniqueName: \"kubernetes.io/projected/b9de7c2e-87ab-4a83-a176-b953b5a5bc0f-kube-api-access-pp4c2\") pod \"b9de7c2e-87ab-4a83-a176-b953b5a5bc0f\" (UID: \"b9de7c2e-87ab-4a83-a176-b953b5a5bc0f\") "
Nov 24 20:45:33 crc kubenswrapper[4812]: I1124 20:45:33.924130 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9de7c2e-87ab-4a83-a176-b953b5a5bc0f-kube-api-access-pp4c2" (OuterVolumeSpecName: "kube-api-access-pp4c2") pod "b9de7c2e-87ab-4a83-a176-b953b5a5bc0f" (UID: "b9de7c2e-87ab-4a83-a176-b953b5a5bc0f"). InnerVolumeSpecName "kube-api-access-pp4c2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:45:33 crc kubenswrapper[4812]: I1124 20:45:33.935985 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:33 crc kubenswrapper[4812]: I1124 20:45:33.940362 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.017629 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp4c2\" (UniqueName: \"kubernetes.io/projected/b9de7c2e-87ab-4a83-a176-b953b5a5bc0f-kube-api-access-pp4c2\") on node \"crc\" DevicePath \"\""
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.077016 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:34 crc kubenswrapper[4812]: E1124 20:45:34.077967 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9de7c2e-87ab-4a83-a176-b953b5a5bc0f" containerName="mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.078014 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9de7c2e-87ab-4a83-a176-b953b5a5bc0f" containerName="mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.078540 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9de7c2e-87ab-4a83-a176-b953b5a5bc0f" containerName="mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.079453 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.092480 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.121711 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xzx\" (UniqueName: \"kubernetes.io/projected/aacb82dc-dde8-4b7f-bf00-96e8706da043-kube-api-access-z6xzx\") pod \"mariadb-client\" (UID: \"aacb82dc-dde8-4b7f-bf00-96e8706da043\") " pod="openstack/mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.224108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6xzx\" (UniqueName: \"kubernetes.io/projected/aacb82dc-dde8-4b7f-bf00-96e8706da043-kube-api-access-z6xzx\") pod \"mariadb-client\" (UID: \"aacb82dc-dde8-4b7f-bf00-96e8706da043\") " pod="openstack/mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.246780 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xzx\" (UniqueName: \"kubernetes.io/projected/aacb82dc-dde8-4b7f-bf00-96e8706da043-kube-api-access-z6xzx\") pod \"mariadb-client\" (UID: \"aacb82dc-dde8-4b7f-bf00-96e8706da043\") " pod="openstack/mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.421106 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.431696 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d9c6887afb372dd8fc3ed55e6ab302736a0b3a4a0b2bf2851bca6b5aab1545"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.431785 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.455856 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="b9de7c2e-87ab-4a83-a176-b953b5a5bc0f" podUID="aacb82dc-dde8-4b7f-bf00-96e8706da043"
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.985764 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9de7c2e-87ab-4a83-a176-b953b5a5bc0f" path="/var/lib/kubelet/pods/b9de7c2e-87ab-4a83-a176-b953b5a5bc0f/volumes"
Nov 24 20:45:34 crc kubenswrapper[4812]: W1124 20:45:34.988130 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaacb82dc_dde8_4b7f_bf00_96e8706da043.slice/crio-329e53fff7da83703323b6cce3cae98af2fbc6b1c4f1269728c379403c7e641b WatchSource:0}: Error finding container 329e53fff7da83703323b6cce3cae98af2fbc6b1c4f1269728c379403c7e641b: Status 404 returned error can't find the container with id 329e53fff7da83703323b6cce3cae98af2fbc6b1c4f1269728c379403c7e641b
Nov 24 20:45:34 crc kubenswrapper[4812]: I1124 20:45:34.988581 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:35 crc kubenswrapper[4812]: I1124 20:45:35.446013 4812 generic.go:334] "Generic (PLEG): container finished" podID="aacb82dc-dde8-4b7f-bf00-96e8706da043" containerID="22d7f500648b128274b829631dfe96bca4557e20bd708550a5e8a8a3b235c194" exitCode=0
Nov 24 20:45:35 crc kubenswrapper[4812]: I1124 20:45:35.446414 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"aacb82dc-dde8-4b7f-bf00-96e8706da043","Type":"ContainerDied","Data":"22d7f500648b128274b829631dfe96bca4557e20bd708550a5e8a8a3b235c194"}
Nov 24 20:45:35 crc kubenswrapper[4812]: I1124 20:45:35.446458 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"aacb82dc-dde8-4b7f-bf00-96e8706da043","Type":"ContainerStarted","Data":"329e53fff7da83703323b6cce3cae98af2fbc6b1c4f1269728c379403c7e641b"}
Nov 24 20:45:36 crc kubenswrapper[4812]: I1124 20:45:36.866520 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:36 crc kubenswrapper[4812]: I1124 20:45:36.886880 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_aacb82dc-dde8-4b7f-bf00-96e8706da043/mariadb-client/0.log"
Nov 24 20:45:36 crc kubenswrapper[4812]: I1124 20:45:36.923498 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:36 crc kubenswrapper[4812]: I1124 20:45:36.930220 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"]
Nov 24 20:45:37 crc kubenswrapper[4812]: I1124 20:45:37.073925 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6xzx\" (UniqueName: \"kubernetes.io/projected/aacb82dc-dde8-4b7f-bf00-96e8706da043-kube-api-access-z6xzx\") pod \"aacb82dc-dde8-4b7f-bf00-96e8706da043\" (UID: \"aacb82dc-dde8-4b7f-bf00-96e8706da043\") "
Nov 24 20:45:37 crc kubenswrapper[4812]: I1124 20:45:37.081945 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aacb82dc-dde8-4b7f-bf00-96e8706da043-kube-api-access-z6xzx" (OuterVolumeSpecName: "kube-api-access-z6xzx") pod "aacb82dc-dde8-4b7f-bf00-96e8706da043" (UID: "aacb82dc-dde8-4b7f-bf00-96e8706da043"). InnerVolumeSpecName "kube-api-access-z6xzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:45:37 crc kubenswrapper[4812]: I1124 20:45:37.178506 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6xzx\" (UniqueName: \"kubernetes.io/projected/aacb82dc-dde8-4b7f-bf00-96e8706da043-kube-api-access-z6xzx\") on node \"crc\" DevicePath \"\""
Nov 24 20:45:37 crc kubenswrapper[4812]: I1124 20:45:37.471524 4812 scope.go:117] "RemoveContainer" containerID="22d7f500648b128274b829631dfe96bca4557e20bd708550a5e8a8a3b235c194"
Nov 24 20:45:37 crc kubenswrapper[4812]: I1124 20:45:37.471574 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Nov 24 20:45:38 crc kubenswrapper[4812]: I1124 20:45:38.986370 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aacb82dc-dde8-4b7f-bf00-96e8706da043" path="/var/lib/kubelet/pods/aacb82dc-dde8-4b7f-bf00-96e8706da043/volumes"
Nov 24 20:45:43 crc kubenswrapper[4812]: I1124 20:45:43.004267 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"
Nov 24 20:45:43 crc kubenswrapper[4812]: E1124 20:45:43.005363 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 20:45:53 crc kubenswrapper[4812]: I1124 20:45:53.192799 4812 scope.go:117] "RemoveContainer" containerID="fb24b21c7a2f5f6be724b3b06b23490ebbf4b8adabe30d4ac7b2bedadda869d9"
Nov 24 20:45:54 crc kubenswrapper[4812]: I1124 20:45:54.966926 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"
Nov 24 20:45:54 crc kubenswrapper[4812]: E1124 20:45:54.967862 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 20:46:06 crc kubenswrapper[4812]: I1124 20:46:06.969996 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"
Nov 24 20:46:06 crc kubenswrapper[4812]: E1124 20:46:06.970700 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.157127 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 20:46:08 crc kubenswrapper[4812]: E1124 20:46:08.157491 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aacb82dc-dde8-4b7f-bf00-96e8706da043" containerName="mariadb-client"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.157511 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="aacb82dc-dde8-4b7f-bf00-96e8706da043" containerName="mariadb-client"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.157697 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="aacb82dc-dde8-4b7f-bf00-96e8706da043" containerName="mariadb-client"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.158644 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.160265 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.160509 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.160541 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.161246 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-kvmml"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.171188 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"]
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.172101 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.173185 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.178030 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"]
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.179296 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.199495 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.204937 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.213511 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243419 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef13b518-4ed5-4eed-8777-14c768a0e2ce-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243469 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243494 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e559397-8cae-43cf-a277-20d09d01f986-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243527 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243554 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243572 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e559397-8cae-43cf-a277-20d09d01f986-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243624 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2dc72379-a117-4617-8c04-53a31bf95039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dc72379-a117-4617-8c04-53a31bf95039\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243646 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243672 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243701 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243731 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243749 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj6ld\" (UniqueName: \"kubernetes.io/projected/2e559397-8cae-43cf-a277-20d09d01f986-kube-api-access-cj6ld\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243769 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243793 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e559397-8cae-43cf-a277-20d09d01f986-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q5jp\" (UniqueName: \"kubernetes.io/projected/ef13b518-4ed5-4eed-8777-14c768a0e2ce-kube-api-access-9q5jp\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243880 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243903 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243925 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-config\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243946 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9w8v\" (UniqueName: \"kubernetes.io/projected/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-kube-api-access-n9w8v\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.243967 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef13b518-4ed5-4eed-8777-14c768a0e2ce-config\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.244007 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.244032 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.244063 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef13b518-4ed5-4eed-8777-14c768a0e2ce-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.345818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.345864 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.345900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef13b518-4ed5-4eed-8777-14c768a0e2ce-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.345930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef13b518-4ed5-4eed-8777-14c768a0e2ce-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.345954 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.345974 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e559397-8cae-43cf-a277-20d09d01f986-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.345997 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346032 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e559397-8cae-43cf-a277-20d09d01f986-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346067 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2dc72379-a117-4617-8c04-53a31bf95039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dc72379-a117-4617-8c04-53a31bf95039\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346083 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346102 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346123 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346147 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346164 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj6ld\" (UniqueName: \"kubernetes.io/projected/2e559397-8cae-43cf-a277-20d09d01f986-kube-api-access-cj6ld\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346184 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346201 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e559397-8cae-43cf-a277-20d09d01f986-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346269 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q5jp\" (UniqueName: \"kubernetes.io/projected/ef13b518-4ed5-4eed-8777-14c768a0e2ce-kube-api-access-9q5jp\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346286 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346301 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346314 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-config\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346348 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9w8v\" (UniqueName: \"kubernetes.io/projected/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-kube-api-access-n9w8v\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.346374 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef13b518-4ed5-4eed-8777-14c768a0e2ce-config\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.349264 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2e559397-8cae-43cf-a277-20d09d01f986-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.349786 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ef13b518-4ed5-4eed-8777-14c768a0e2ce-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.350205 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ef13b518-4ed5-4eed-8777-14c768a0e2ce-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.350349 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2e559397-8cae-43cf-a277-20d09d01f986-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.350996 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef13b518-4ed5-4eed-8777-14c768a0e2ce-config\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.352946 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.353525 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e559397-8cae-43cf-a277-20d09d01f986-config\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.353978 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.354584 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-config\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.356608 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.357619 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.357930 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.358017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.357958 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.358716 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.358781 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e559397-8cae-43cf-a277-20d09d01f986-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.365708 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.366169 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/69f4da2b82da1fa233efed2c624efcc292d74d975f1f2dafce405425c4723c22/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.367049 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.367142 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2dc72379-a117-4617-8c04-53a31bf95039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dc72379-a117-4617-8c04-53a31bf95039\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/792c92b14c5ac3f1690585a1909b6436dce79651942bc8897aa081f0db1be32e/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.368037 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.368106 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d725d22c0d7e168f5653a5aefc81fc434c0d4f8b3153ec1e02970e10f5cd437/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.374784 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.378401 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q5jp\" (UniqueName: \"kubernetes.io/projected/ef13b518-4ed5-4eed-8777-14c768a0e2ce-kube-api-access-9q5jp\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.379865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj6ld\" (UniqueName: \"kubernetes.io/projected/2e559397-8cae-43cf-a277-20d09d01f986-kube-api-access-cj6ld\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.388393 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef13b518-4ed5-4eed-8777-14c768a0e2ce-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.388410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9w8v\" (UniqueName: \"kubernetes.io/projected/be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b-kube-api-access-n9w8v\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.417158 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-66d0d703-f190-4ec9-aa02-a40e69477d5a\") pod \"ovsdbserver-nb-1\" (UID: \"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b\") " pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.424275 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2d528d82-e4ec-48ac-83d2-905b6e3f4675\") pod \"ovsdbserver-nb-0\" (UID: \"2e559397-8cae-43cf-a277-20d09d01f986\") " pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.424976 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2dc72379-a117-4617-8c04-53a31bf95039\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2dc72379-a117-4617-8c04-53a31bf95039\") pod \"ovsdbserver-nb-2\" (UID: \"ef13b518-4ed5-4eed-8777-14c768a0e2ce\") " pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.502009 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.515242 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.529772 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Nov 24 20:46:08 crc kubenswrapper[4812]: I1124 20:46:08.831365 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.125170 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Nov 24 20:46:09 crc kubenswrapper[4812]: W1124 20:46:09.131180 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef13b518_4ed5_4eed_8777_14c768a0e2ce.slice/crio-a7a3f56add471405013a31e28571404ccaa9f440e5a3c4c6afdf6cf4911ceaf8 WatchSource:0}: Error finding container a7a3f56add471405013a31e28571404ccaa9f440e5a3c4c6afdf6cf4911ceaf8: Status 404 returned error can't find the container with id a7a3f56add471405013a31e28571404ccaa9f440e5a3c4c6afdf6cf4911ceaf8
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.223154 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Nov 24 20:46:09 crc kubenswrapper[4812]: W1124 20:46:09.235488 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe72f6f6_bdc0_4b52_8eaa_f07fdf94c66b.slice/crio-d5a07b00b4cbafa74b06b8e47e06f86dba84ce33f057d41eec3647be135541e7 WatchSource:0}: Error finding container d5a07b00b4cbafa74b06b8e47e06f86dba84ce33f057d41eec3647be135541e7: Status 404 returned error can't find the container with id d5a07b00b4cbafa74b06b8e47e06f86dba84ce33f057d41eec3647be135541e7
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.781803 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"ef13b518-4ed5-4eed-8777-14c768a0e2ce","Type":"ContainerStarted","Data":"f17bab1dda8fd53efb2193317a275dad01beba31538671bd39b40d76868a3cde"}
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.782148 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"ef13b518-4ed5-4eed-8777-14c768a0e2ce","Type":"ContainerStarted","Data":"e9982cf895e9f52f5f79a04f591cfc33bc333bf835c2563680cc38838090f2cb"}
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.782162 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"ef13b518-4ed5-4eed-8777-14c768a0e2ce","Type":"ContainerStarted","Data":"a7a3f56add471405013a31e28571404ccaa9f440e5a3c4c6afdf6cf4911ceaf8"}
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.784120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b","Type":"ContainerStarted","Data":"b2aeb4be7e03edcaf79b04cbd4c4a298df2b16c8b08aba9fce0ea6e45c2fac3d"}
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.784145 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b","Type":"ContainerStarted","Data":"e4ce4ffb9aec4315ca48c751f4da0c52471767f45c3442782a4b97fc42b10fb6"}
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.784157 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b","Type":"ContainerStarted","Data":"d5a07b00b4cbafa74b06b8e47e06f86dba84ce33f057d41eec3647be135541e7"}
Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.786161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/ovsdbserver-nb-0" event={"ID":"2e559397-8cae-43cf-a277-20d09d01f986","Type":"ContainerStarted","Data":"36b56a2e21d68b8cef32b9c914546e0ca4da3238996874c618bb6844e45e9267"} Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.786207 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e559397-8cae-43cf-a277-20d09d01f986","Type":"ContainerStarted","Data":"f78e86db9a006afdad536ad735493fb9e29ceed85096c1d1813335af8dccdc3b"} Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.786218 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2e559397-8cae-43cf-a277-20d09d01f986","Type":"ContainerStarted","Data":"108c74a995cbba6dc586a3f32f3f06d48b2df467c4c6b77b3689e86ddd293941"} Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.808052 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=2.8080400279999997 podStartE2EDuration="2.808040028s" podCreationTimestamp="2025-11-24 20:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:09.803577962 +0000 UTC m=+5363.592530343" watchObservedRunningTime="2025-11-24 20:46:09.808040028 +0000 UTC m=+5363.596992399" Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.821539 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.821530941 podStartE2EDuration="2.821530941s" podCreationTimestamp="2025-11-24 20:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:09.818856605 +0000 UTC m=+5363.607808976" watchObservedRunningTime="2025-11-24 20:46:09.821530941 +0000 UTC m=+5363.610483312" Nov 24 20:46:09 crc kubenswrapper[4812]: I1124 20:46:09.839909 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=2.839889992 podStartE2EDuration="2.839889992s" podCreationTimestamp="2025-11-24 20:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:09.835992472 +0000 UTC m=+5363.624944843" watchObservedRunningTime="2025-11-24 20:46:09.839889992 +0000 UTC m=+5363.628842363" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.410672 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.413726 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.421293 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.421654 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.421874 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-5m2dm" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.422231 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.430377 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.432487 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.443261 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.450400 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.451991 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.476528 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.487434 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579168 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579213 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579259 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-config\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579287 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58438938-925a-4426-9405-e5d3a56db751-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579399 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d45d985f-bbba-4315-91e4-d59c171a4256\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d45d985f-bbba-4315-91e4-d59c171a4256\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579442 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/05fc892d-542f-483d-a083-ba3dac6f222b-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579473 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58438938-925a-4426-9405-e5d3a56db751-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579490 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7rp\" (UniqueName: \"kubernetes.io/projected/05fc892d-542f-483d-a083-ba3dac6f222b-kube-api-access-ll7rp\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579511 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58438938-925a-4426-9405-e5d3a56db751-config\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579533 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgwv\" (UniqueName: \"kubernetes.io/projected/58438938-925a-4426-9405-e5d3a56db751-kube-api-access-8zgwv\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579572 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdq9c\" (UniqueName: \"kubernetes.io/projected/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-kube-api-access-zdq9c\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579625 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc892d-542f-483d-a083-ba3dac6f222b-config\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579659 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579691 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579732 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fc892d-542f-483d-a083-ba3dac6f222b-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579784 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579825 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579865 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579893 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.579958 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.580005 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681516 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681598 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fc892d-542f-483d-a083-ba3dac6f222b-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681668 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681828 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681871 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.681927 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682043 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682087 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-config\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682159 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58438938-925a-4426-9405-e5d3a56db751-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682190 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682228 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d45d985f-bbba-4315-91e4-d59c171a4256\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d45d985f-bbba-4315-91e4-d59c171a4256\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682264 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682298 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/05fc892d-542f-483d-a083-ba3dac6f222b-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " 
pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682391 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58438938-925a-4426-9405-e5d3a56db751-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682426 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7rp\" (UniqueName: \"kubernetes.io/projected/05fc892d-542f-483d-a083-ba3dac6f222b-kube-api-access-ll7rp\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682456 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58438938-925a-4426-9405-e5d3a56db751-config\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682488 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgwv\" (UniqueName: \"kubernetes.io/projected/58438938-925a-4426-9405-e5d3a56db751-kube-api-access-8zgwv\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682526 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdq9c\" (UniqueName: \"kubernetes.io/projected/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-kube-api-access-zdq9c\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682561 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc892d-542f-483d-a083-ba3dac6f222b-config\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.682964 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05fc892d-542f-483d-a083-ba3dac6f222b-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.683418 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/58438938-925a-4426-9405-e5d3a56db751-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.683918 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" 
(UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.685603 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.685655 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1a21b4330f349d2a406af8e4a9dc4122704de1a9f6eef72fe78339be0b5288ad/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.685938 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.686005 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9f53376915bb0c9cece54215090b5ebe2c34b731f251de30e32831f7ae405efc/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.686100 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/05fc892d-542f-483d-a083-ba3dac6f222b-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.686422 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58438938-925a-4426-9405-e5d3a56db751-config\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.686498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-config\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.686746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58438938-925a-4426-9405-e5d3a56db751-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.687975 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05fc892d-542f-483d-a083-ba3dac6f222b-config\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.688782 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.689018 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d45d985f-bbba-4315-91e4-d59c171a4256\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d45d985f-bbba-4315-91e4-d59c171a4256\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/08f7386b9173548a7721a4b746e42ad7b4bc97b20e35d20c03c6679a6d506bd4/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.692197 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.696482 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.696774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.696911 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.696976 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.698092 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.699425 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.703639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05fc892d-542f-483d-a083-ba3dac6f222b-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 
20:46:10.703774 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.712239 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/58438938-925a-4426-9405-e5d3a56db751-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.719962 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgwv\" (UniqueName: \"kubernetes.io/projected/58438938-925a-4426-9405-e5d3a56db751-kube-api-access-8zgwv\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.721680 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdq9c\" (UniqueName: \"kubernetes.io/projected/22e355a3-e7d9-43a2-8b76-f0d8808a0f85-kube-api-access-zdq9c\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.736747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7rp\" (UniqueName: \"kubernetes.io/projected/05fc892d-542f-483d-a083-ba3dac6f222b-kube-api-access-ll7rp\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.764381 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e8b62c45-53a7-42c3-9c97-1db4e23525c8\") pod \"ovsdbserver-sb-2\" (UID: \"58438938-925a-4426-9405-e5d3a56db751\") " pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.766451 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d45d985f-bbba-4315-91e4-d59c171a4256\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d45d985f-bbba-4315-91e4-d59c171a4256\") pod \"ovsdbserver-sb-0\" (UID: \"22e355a3-e7d9-43a2-8b76-f0d8808a0f85\") " pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.774681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c7b202a8-d34c-4c3c-9267-4e049471790b\") pod \"ovsdbserver-sb-1\" (UID: \"05fc892d-542f-483d-a083-ba3dac6f222b\") " pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.791327 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:10 crc kubenswrapper[4812]: I1124 20:46:10.806210 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.058669 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.237214 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.336433 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.444424 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.502761 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.515715 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.530764 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.557856 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.811404 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"22e355a3-e7d9-43a2-8b76-f0d8808a0f85","Type":"ContainerStarted","Data":"9886f34481d0b2ca8e2bd1e7a9ac8e92f4b3f37e4fcc8eaefc9ee063f0584fae"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.811788 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"22e355a3-e7d9-43a2-8b76-f0d8808a0f85","Type":"ContainerStarted","Data":"234293de47040a51a2e60eb141d0dee1943656ee8eae6e64316f205b6c000285"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.811804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"22e355a3-e7d9-43a2-8b76-f0d8808a0f85","Type":"ContainerStarted","Data":"127a86eae8e002040d3f84279dd39e8581c608f8b2ff4a013868187d469fcf3e"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.813520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"05fc892d-542f-483d-a083-ba3dac6f222b","Type":"ContainerStarted","Data":"c2b0e16aa7785efc1b52ad02713586fd826984eb01fb8516438c680dda638535"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.813559 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"05fc892d-542f-483d-a083-ba3dac6f222b","Type":"ContainerStarted","Data":"c49e5583b35a065c9134c7df7feb158000d466a23fd246112dd5172cba840493"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.817569 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"58438938-925a-4426-9405-e5d3a56db751","Type":"ContainerStarted","Data":"faa43b11f009790bec5da307f3d1d6cd00cca51ddffcb70b232f00a7354a98fe"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.817610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"58438938-925a-4426-9405-e5d3a56db751","Type":"ContainerStarted","Data":"d98078b2b752ff3195fb04acb6eabda7135ceab042d14efb5d70f60655bce9c9"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.817624 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"58438938-925a-4426-9405-e5d3a56db751","Type":"ContainerStarted","Data":"3b66d2b52ccbd09bfa799452d751b34aab45902e9c45e812befd69ce1db73666"} Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.818187 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.841638 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.841618006 podStartE2EDuration="2.841618006s" podCreationTimestamp="2025-11-24 20:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:11.835660617 +0000 UTC m=+5365.624612988" watchObservedRunningTime="2025-11-24 20:46:11.841618006 +0000 UTC m=+5365.630570377" Nov 24 20:46:11 crc kubenswrapper[4812]: I1124 20:46:11.865184 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=2.865167004 podStartE2EDuration="2.865167004s" podCreationTimestamp="2025-11-24 20:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:11.861830199 +0000 UTC m=+5365.650782570" watchObservedRunningTime="2025-11-24 20:46:11.865167004 +0000 UTC m=+5365.654119375" Nov 24 20:46:12 crc kubenswrapper[4812]: I1124 20:46:12.826349 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"05fc892d-542f-483d-a083-ba3dac6f222b","Type":"ContainerStarted","Data":"18d4d7c2b5fbf0ef285ce3f2a7f14ef38a8875e17edf95160fd2f49d446fef7c"} Nov 24 20:46:12 crc kubenswrapper[4812]: I1124 20:46:12.848687 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.8486575419999998 podStartE2EDuration="3.848657542s" podCreationTimestamp="2025-11-24 20:46:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:12.843103344 +0000 UTC m=+5366.632055745" watchObservedRunningTime="2025-11-24 20:46:12.848657542 +0000 UTC m=+5366.637609913" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.515593 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.530787 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.573861 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.791697 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.806887 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.846803 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58cb56488c-fx429"] Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.851586 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.853426 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.904773 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58cb56488c-fx429"] Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.945753 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l459\" (UniqueName: \"kubernetes.io/projected/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-kube-api-access-7l459\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.945872 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-dns-svc\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.945935 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-config\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:13 crc kubenswrapper[4812]: I1124 20:46:13.946005 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-ovsdbserver-nb\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.047154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-config\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.047951 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-config\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.048026 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-ovsdbserver-nb\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.048582 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-ovsdbserver-nb\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc 
kubenswrapper[4812]: I1124 20:46:14.048741 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l459\" (UniqueName: \"kubernetes.io/projected/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-kube-api-access-7l459\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.048775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-dns-svc\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.049429 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-dns-svc\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.059741 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.073900 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l459\" (UniqueName: \"kubernetes.io/projected/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-kube-api-access-7l459\") pod \"dnsmasq-dns-58cb56488c-fx429\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.116520 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.181622 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.436576 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58cb56488c-fx429"] Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.575184 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.578819 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.646714 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.647045 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.841365 4812 generic.go:334] "Generic (PLEG): container finished" podID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerID="67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565" exitCode=0 Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.842514 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb56488c-fx429" event={"ID":"95e3e2dd-1fc0-432d-ada5-f28aa93b587f","Type":"ContainerDied","Data":"67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565"} Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.842549 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb56488c-fx429" event={"ID":"95e3e2dd-1fc0-432d-ada5-f28aa93b587f","Type":"ContainerStarted","Data":"c2000d6086d5ffcde522c78a2a3070e1fe34111c205832e9225726438c9b17f3"} Nov 24 20:46:14 crc kubenswrapper[4812]: I1124 20:46:14.843217 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:15 crc kubenswrapper[4812]: I1124 20:46:15.792147 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:15 crc kubenswrapper[4812]: I1124 20:46:15.807141 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:15 crc kubenswrapper[4812]: I1124 20:46:15.858266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb56488c-fx429" event={"ID":"95e3e2dd-1fc0-432d-ada5-f28aa93b587f","Type":"ContainerStarted","Data":"cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2"} Nov 24 20:46:15 crc kubenswrapper[4812]: I1124 20:46:15.919781 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58cb56488c-fx429" podStartSLOduration=2.919758921 podStartE2EDuration="2.919758921s" podCreationTimestamp="2025-11-24 20:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:15.917797335 +0000 UTC m=+5369.706749716" watchObservedRunningTime="2025-11-24 20:46:15.919758921 +0000 UTC m=+5369.708711292" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.109679 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.399310 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58cb56488c-fx429"] Nov 24 20:46:16 crc kubenswrapper[4812]: 
I1124 20:46:16.424073 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57f9f8f7d7-5z5lr"] Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.425434 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.427504 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.435979 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f9f8f7d7-5z5lr"] Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.552637 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-nb\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.552715 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4rzk\" (UniqueName: \"kubernetes.io/projected/5505d922-48b8-4b5e-9c7f-97446de3025c-kube-api-access-b4rzk\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.552733 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-config\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.553280 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-dns-svc\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.553308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-sb\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.655474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-nb\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.655638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4rzk\" (UniqueName: \"kubernetes.io/projected/5505d922-48b8-4b5e-9c7f-97446de3025c-kube-api-access-b4rzk\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.655702 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-config\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.655768 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-dns-svc\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.655807 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-sb\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.656378 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-nb\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.657023 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-sb\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.657021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-dns-svc\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.657364 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-config\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.690790 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4rzk\" (UniqueName: \"kubernetes.io/projected/5505d922-48b8-4b5e-9c7f-97446de3025c-kube-api-access-b4rzk\") pod \"dnsmasq-dns-57f9f8f7d7-5z5lr\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.750770 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.841022 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.869147 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.947557 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.995283 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 24 20:46:16 crc kubenswrapper[4812]: I1124 20:46:16.995797 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 24 20:46:17 crc kubenswrapper[4812]: I1124 20:46:17.139401 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57f9f8f7d7-5z5lr"] Nov 24 20:46:17 crc kubenswrapper[4812]: I1124 20:46:17.883058 4812 generic.go:334] "Generic (PLEG): container finished" podID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerID="43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874" exitCode=0 Nov 24 20:46:17 crc kubenswrapper[4812]: I1124 20:46:17.884045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" event={"ID":"5505d922-48b8-4b5e-9c7f-97446de3025c","Type":"ContainerDied","Data":"43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874"} Nov 24 20:46:17 crc kubenswrapper[4812]: I1124 20:46:17.884119 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" event={"ID":"5505d922-48b8-4b5e-9c7f-97446de3025c","Type":"ContainerStarted","Data":"267d17aeeb27e6be703ac3753c42e1cbf8d5ca05d06b5dbba95026d4c11cfbf4"} Nov 24 20:46:17 crc kubenswrapper[4812]: I1124 20:46:17.884760 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58cb56488c-fx429" podUID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerName="dnsmasq-dns" containerID="cri-o://cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2" gracePeriod=10 Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.434995 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.606850 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-ovsdbserver-nb\") pod \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.606931 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-config\") pod \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.607104 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-dns-svc\") pod \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.607163 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l459\" (UniqueName: \"kubernetes.io/projected/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-kube-api-access-7l459\") pod \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\" (UID: \"95e3e2dd-1fc0-432d-ada5-f28aa93b587f\") " Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.612197 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-kube-api-access-7l459" (OuterVolumeSpecName: "kube-api-access-7l459") pod "95e3e2dd-1fc0-432d-ada5-f28aa93b587f" (UID: "95e3e2dd-1fc0-432d-ada5-f28aa93b587f"). InnerVolumeSpecName "kube-api-access-7l459". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.647548 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95e3e2dd-1fc0-432d-ada5-f28aa93b587f" (UID: "95e3e2dd-1fc0-432d-ada5-f28aa93b587f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.653593 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95e3e2dd-1fc0-432d-ada5-f28aa93b587f" (UID: "95e3e2dd-1fc0-432d-ada5-f28aa93b587f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.655555 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-config" (OuterVolumeSpecName: "config") pod "95e3e2dd-1fc0-432d-ada5-f28aa93b587f" (UID: "95e3e2dd-1fc0-432d-ada5-f28aa93b587f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.708710 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.708741 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l459\" (UniqueName: \"kubernetes.io/projected/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-kube-api-access-7l459\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.708754 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.708768 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95e3e2dd-1fc0-432d-ada5-f28aa93b587f-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.894135 4812 generic.go:334] "Generic (PLEG): container finished" podID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerID="cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2" exitCode=0 Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.894209 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb56488c-fx429" event={"ID":"95e3e2dd-1fc0-432d-ada5-f28aa93b587f","Type":"ContainerDied","Data":"cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2"} Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.894245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58cb56488c-fx429" event={"ID":"95e3e2dd-1fc0-432d-ada5-f28aa93b587f","Type":"ContainerDied","Data":"c2000d6086d5ffcde522c78a2a3070e1fe34111c205832e9225726438c9b17f3"} Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.894271 4812 scope.go:117] "RemoveContainer" containerID="cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.894447 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58cb56488c-fx429" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.904182 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" event={"ID":"5505d922-48b8-4b5e-9c7f-97446de3025c","Type":"ContainerStarted","Data":"460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8"} Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.904532 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.927713 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" podStartSLOduration=2.927684146 podStartE2EDuration="2.927684146s" podCreationTimestamp="2025-11-24 20:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:18.923202519 +0000 UTC m=+5372.712154910" watchObservedRunningTime="2025-11-24 20:46:18.927684146 +0000 UTC m=+5372.716636537" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.934755 4812 scope.go:117] "RemoveContainer" containerID="67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.949792 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58cb56488c-fx429"] Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.955794 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58cb56488c-fx429"] Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.967384 4812 scope.go:117] "RemoveContainer" containerID="cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2" Nov 24 20:46:18 crc kubenswrapper[4812]: E1124 20:46:18.967877 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2\": container with ID starting with cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2 not found: ID does not exist" containerID="cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.967914 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2"} err="failed to get container status \"cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2\": rpc error: code = NotFound desc = could not find container \"cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2\": container with ID starting with cd1098291a61e6be20a8070b73c4424b5f20f4d4654f15c6898b2162ddcb2bb2 not found: ID does not exist" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.967939 4812 scope.go:117] "RemoveContainer" containerID="67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565" Nov 24 20:46:18 crc kubenswrapper[4812]: E1124 20:46:18.968891 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565\": container with ID starting with 67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565 not found: ID does not exist" containerID="67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565" Nov 24 20:46:18 crc 
kubenswrapper[4812]: I1124 20:46:18.968923 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565"} err="failed to get container status \"67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565\": rpc error: code = NotFound desc = could not find container \"67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565\": container with ID starting with 67c55c0a3d9a38997526444fac2857e74f54b86eb42c71dfd7cfc87a4feba565 not found: ID does not exist" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.969312 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:46:18 crc kubenswrapper[4812]: E1124 20:46:18.969600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:46:18 crc kubenswrapper[4812]: I1124 20:46:18.981673 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" path="/var/lib/kubelet/pods/95e3e2dd-1fc0-432d-ada5-f28aa93b587f/volumes" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.530548 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 24 20:46:19 crc kubenswrapper[4812]: E1124 20:46:19.531462 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerName="dnsmasq-dns" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.531497 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerName="dnsmasq-dns" Nov 24 20:46:19 crc kubenswrapper[4812]: E1124 20:46:19.531532 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerName="init" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.531544 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerName="init" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.531883 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e3e2dd-1fc0-432d-ada5-f28aa93b587f" containerName="dnsmasq-dns" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.532799 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.535403 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.552885 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.625149 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.625228 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27m9f\" (UniqueName: \"kubernetes.io/projected/21f4d7d0-d494-4c0e-8a72-5920867ba86d-kube-api-access-27m9f\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.625266 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/21f4d7d0-d494-4c0e-8a72-5920867ba86d-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.726612 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/21f4d7d0-d494-4c0e-8a72-5920867ba86d-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.726737 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.726795 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27m9f\" (UniqueName: \"kubernetes.io/projected/21f4d7d0-d494-4c0e-8a72-5920867ba86d-kube-api-access-27m9f\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.730663 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.730730 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b77b111e42035186add24991b52e6a59d2f03c303f3a028df88cfebd9034aec1/globalmount\"" pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.733646 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/21f4d7d0-d494-4c0e-8a72-5920867ba86d-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.750729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27m9f\" (UniqueName: \"kubernetes.io/projected/21f4d7d0-d494-4c0e-8a72-5920867ba86d-kube-api-access-27m9f\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.777018 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\") pod \"ovn-copy-data\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") " pod="openstack/ovn-copy-data" Nov 24 20:46:19 crc kubenswrapper[4812]: I1124 20:46:19.887213 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 24 20:46:20 crc kubenswrapper[4812]: I1124 20:46:20.470709 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 24 20:46:20 crc kubenswrapper[4812]: W1124 20:46:20.511667 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21f4d7d0_d494_4c0e_8a72_5920867ba86d.slice/crio-be0a9268e7482ce7355c550419baddae4844a795bd5444d26c50ef4b8143e6a4 WatchSource:0}: Error finding container be0a9268e7482ce7355c550419baddae4844a795bd5444d26c50ef4b8143e6a4: Status 404 returned error can't find the container with id be0a9268e7482ce7355c550419baddae4844a795bd5444d26c50ef4b8143e6a4 Nov 24 20:46:20 crc kubenswrapper[4812]: I1124 20:46:20.932017 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"21f4d7d0-d494-4c0e-8a72-5920867ba86d","Type":"ContainerStarted","Data":"be0a9268e7482ce7355c550419baddae4844a795bd5444d26c50ef4b8143e6a4"} Nov 24 20:46:21 crc kubenswrapper[4812]: I1124 20:46:21.945977 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"21f4d7d0-d494-4c0e-8a72-5920867ba86d","Type":"ContainerStarted","Data":"52296d3c9a774ef743b4c069f03eb78366aedce02ccfe46129409317ba18fdfa"} Nov 24 20:46:21 crc kubenswrapper[4812]: I1124 20:46:21.970446 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.442422107 podStartE2EDuration="3.97042074s" podCreationTimestamp="2025-11-24 20:46:18 +0000 UTC" firstStartedPulling="2025-11-24 20:46:20.518318263 +0000 UTC m=+5374.307270634" lastFinishedPulling="2025-11-24 20:46:21.046316896 +0000 UTC m=+5374.835269267" observedRunningTime="2025-11-24 20:46:21.96690716 +0000 UTC m=+5375.755859561" watchObservedRunningTime="2025-11-24 20:46:21.97042074 +0000 UTC m=+5375.759373141" Nov 24 20:46:26 crc kubenswrapper[4812]: E1124 20:46:26.746641 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:52680->38.102.83.36:46073: write tcp 38.102.83.36:52680->38.102.83.36:46073: write: connection reset by peer Nov 24 20:46:26 crc kubenswrapper[4812]: I1124 20:46:26.753889 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:26 crc kubenswrapper[4812]: I1124 20:46:26.827220 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54564445dc-jx4gp"] Nov 24 20:46:26 crc kubenswrapper[4812]: I1124 20:46:26.827634 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" podUID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerName="dnsmasq-dns" containerID="cri-o://13669e15f6c5f931c7bcc344d214d76bce6f192720e0785f9d085109d5c790aa" gracePeriod=10 Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.012761 4812 generic.go:334] "Generic (PLEG): container finished" podID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerID="13669e15f6c5f931c7bcc344d214d76bce6f192720e0785f9d085109d5c790aa" exitCode=0 Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.012801 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" event={"ID":"beccfcee-f6c2-418c-a598-40f78a06b03c","Type":"ContainerDied","Data":"13669e15f6c5f931c7bcc344d214d76bce6f192720e0785f9d085109d5c790aa"} Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 
20:46:27.355832 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.370179 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-dns-svc\") pod \"beccfcee-f6c2-418c-a598-40f78a06b03c\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.370417 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-config\") pod \"beccfcee-f6c2-418c-a598-40f78a06b03c\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.371235 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6mwx\" (UniqueName: \"kubernetes.io/projected/beccfcee-f6c2-418c-a598-40f78a06b03c-kube-api-access-k6mwx\") pod \"beccfcee-f6c2-418c-a598-40f78a06b03c\" (UID: \"beccfcee-f6c2-418c-a598-40f78a06b03c\") " Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.379900 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beccfcee-f6c2-418c-a598-40f78a06b03c-kube-api-access-k6mwx" (OuterVolumeSpecName: "kube-api-access-k6mwx") pod "beccfcee-f6c2-418c-a598-40f78a06b03c" (UID: "beccfcee-f6c2-418c-a598-40f78a06b03c"). InnerVolumeSpecName "kube-api-access-k6mwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.433193 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-config" (OuterVolumeSpecName: "config") pod "beccfcee-f6c2-418c-a598-40f78a06b03c" (UID: "beccfcee-f6c2-418c-a598-40f78a06b03c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.445166 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "beccfcee-f6c2-418c-a598-40f78a06b03c" (UID: "beccfcee-f6c2-418c-a598-40f78a06b03c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.474114 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.474154 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/beccfcee-f6c2-418c-a598-40f78a06b03c-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.474165 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6mwx\" (UniqueName: \"kubernetes.io/projected/beccfcee-f6c2-418c-a598-40f78a06b03c-kube-api-access-k6mwx\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.496069 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 20:46:27 crc kubenswrapper[4812]: E1124 20:46:27.496372 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerName="init" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.496388 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerName="init" Nov 24 20:46:27 crc kubenswrapper[4812]: E1124 20:46:27.496407 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerName="dnsmasq-dns" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.496414 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerName="dnsmasq-dns" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.496544 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="beccfcee-f6c2-418c-a598-40f78a06b03c" containerName="dnsmasq-dns" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.497385 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.498824 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.498997 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-lc6mt" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.499704 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.502023 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.521259 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.574789 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.574862 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-config\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.574945 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-scripts\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.574975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzlvf\" (UniqueName: \"kubernetes.io/projected/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-kube-api-access-nzlvf\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.575009 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.575059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.575186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: 
I1124 20:46:27.677026 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-config\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.677107 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-scripts\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.677130 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzlvf\" (UniqueName: \"kubernetes.io/projected/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-kube-api-access-nzlvf\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.677159 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.677199 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.677242 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.677259 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.677992 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.678277 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-config\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.678649 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-scripts\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.680381 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.680482 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.681271 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.692601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzlvf\" (UniqueName: \"kubernetes.io/projected/7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53-kube-api-access-nzlvf\") pod \"ovn-northd-0\" (UID: \"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53\") " pod="openstack/ovn-northd-0" Nov 24 20:46:27 crc kubenswrapper[4812]: I1124 20:46:27.814742 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.033638 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" event={"ID":"beccfcee-f6c2-418c-a598-40f78a06b03c","Type":"ContainerDied","Data":"4059444df73457378bad8dc50097dec731278a3bdcc243092e7aef13f718dc4c"} Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.033892 4812 scope.go:117] "RemoveContainer" containerID="13669e15f6c5f931c7bcc344d214d76bce6f192720e0785f9d085109d5c790aa" Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.033685 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54564445dc-jx4gp" Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.066611 4812 scope.go:117] "RemoveContainer" containerID="9e740797b895af05e04286334ca8268a6ec08e25da9d4f5320fef38bf57551dc" Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.091033 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54564445dc-jx4gp"] Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.098045 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54564445dc-jx4gp"] Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.322915 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 20:46:28 crc kubenswrapper[4812]: W1124 20:46:28.330704 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d2ac6f4_f8d8_4a31_a4db_bf95c1b2da53.slice/crio-8dfe0b7f85873a4d568ef3d32115defac75550a07865f91cd4d3540d879f55c4 WatchSource:0}: Error finding container 8dfe0b7f85873a4d568ef3d32115defac75550a07865f91cd4d3540d879f55c4: Status 404 returned error can't find the container with id 8dfe0b7f85873a4d568ef3d32115defac75550a07865f91cd4d3540d879f55c4 Nov 24 20:46:28 crc kubenswrapper[4812]: I1124 20:46:28.981903 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beccfcee-f6c2-418c-a598-40f78a06b03c" path="/var/lib/kubelet/pods/beccfcee-f6c2-418c-a598-40f78a06b03c/volumes" Nov 24 20:46:29 crc kubenswrapper[4812]: I1124 20:46:29.047018 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53","Type":"ContainerStarted","Data":"9a475364b4421d5b0485101108ea04aedf9edad0d2c29cfb327ae943585efd2a"} Nov 24 20:46:29 crc kubenswrapper[4812]: I1124 20:46:29.047080 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53","Type":"ContainerStarted","Data":"bfa80ba4876962101f3a1a39f58922d73a7e9e866225c50f0b0bb59ac800b462"} Nov 24 20:46:29 crc kubenswrapper[4812]: I1124 20:46:29.047099 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53","Type":"ContainerStarted","Data":"8dfe0b7f85873a4d568ef3d32115defac75550a07865f91cd4d3540d879f55c4"} Nov 24 20:46:29 crc kubenswrapper[4812]: I1124 20:46:29.047234 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 20:46:29 crc kubenswrapper[4812]: I1124 20:46:29.071015 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.070992812 podStartE2EDuration="2.070992812s" podCreationTimestamp="2025-11-24 20:46:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:29.067217385 +0000 UTC m=+5382.856169796" watchObservedRunningTime="2025-11-24 20:46:29.070992812 +0000 UTC m=+5382.859945223" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.894651 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-h5786"] Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.896427 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-h5786" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.903515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h5786"] Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.908127 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-688e-account-create-6wp5w"] Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.909133 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.910844 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.924218 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-688e-account-create-6wp5w"] Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.965991 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:46:32 crc kubenswrapper[4812]: E1124 20:46:32.966291 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.980730 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs88d\" (UniqueName: \"kubernetes.io/projected/dc8526b1-356f-4e7f-bee9-4af150a24496-kube-api-access-cs88d\") pod \"keystone-688e-account-create-6wp5w\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.980795 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8526b1-356f-4e7f-bee9-4af150a24496-operator-scripts\") pod \"keystone-688e-account-create-6wp5w\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.981261 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-operator-scripts\") pod \"keystone-db-create-h5786\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " pod="openstack/keystone-db-create-h5786" Nov 24 20:46:32 crc kubenswrapper[4812]: I1124 20:46:32.981373 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxmj7\" (UniqueName: \"kubernetes.io/projected/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-kube-api-access-qxmj7\") pod \"keystone-db-create-h5786\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " pod="openstack/keystone-db-create-h5786" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.083380 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-operator-scripts\") pod 
\"keystone-db-create-h5786\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " pod="openstack/keystone-db-create-h5786" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.083428 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxmj7\" (UniqueName: \"kubernetes.io/projected/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-kube-api-access-qxmj7\") pod \"keystone-db-create-h5786\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " pod="openstack/keystone-db-create-h5786" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.083451 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs88d\" (UniqueName: \"kubernetes.io/projected/dc8526b1-356f-4e7f-bee9-4af150a24496-kube-api-access-cs88d\") pod \"keystone-688e-account-create-6wp5w\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.083482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8526b1-356f-4e7f-bee9-4af150a24496-operator-scripts\") pod \"keystone-688e-account-create-6wp5w\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.084154 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8526b1-356f-4e7f-bee9-4af150a24496-operator-scripts\") pod \"keystone-688e-account-create-6wp5w\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.084639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-operator-scripts\") pod \"keystone-db-create-h5786\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " pod="openstack/keystone-db-create-h5786" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.102597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs88d\" (UniqueName: \"kubernetes.io/projected/dc8526b1-356f-4e7f-bee9-4af150a24496-kube-api-access-cs88d\") pod \"keystone-688e-account-create-6wp5w\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.110196 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxmj7\" (UniqueName: \"kubernetes.io/projected/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-kube-api-access-qxmj7\") pod \"keystone-db-create-h5786\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " pod="openstack/keystone-db-create-h5786" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.224119 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h5786" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.236442 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.727923 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-688e-account-create-6wp5w"] Nov 24 20:46:33 crc kubenswrapper[4812]: W1124 20:46:33.735508 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc8526b1_356f_4e7f_bee9_4af150a24496.slice/crio-cccc19d7fbfda43f6a472e528d4c811881203535ad2e1d07a6ce74ee47c2ddb4 WatchSource:0}: Error finding container cccc19d7fbfda43f6a472e528d4c811881203535ad2e1d07a6ce74ee47c2ddb4: Status 404 returned error can't find the container with id cccc19d7fbfda43f6a472e528d4c811881203535ad2e1d07a6ce74ee47c2ddb4 Nov 24 20:46:33 crc kubenswrapper[4812]: I1124 20:46:33.791577 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-h5786"] Nov 24 20:46:34 crc kubenswrapper[4812]: I1124 20:46:34.093590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-688e-account-create-6wp5w" event={"ID":"dc8526b1-356f-4e7f-bee9-4af150a24496","Type":"ContainerStarted","Data":"b3716cd45b7c1fb7cbc2ef23c00686aba0549080fe89d82dcb88541da2068703"} Nov 24 20:46:34 crc kubenswrapper[4812]: I1124 20:46:34.093637 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-688e-account-create-6wp5w" event={"ID":"dc8526b1-356f-4e7f-bee9-4af150a24496","Type":"ContainerStarted","Data":"cccc19d7fbfda43f6a472e528d4c811881203535ad2e1d07a6ce74ee47c2ddb4"} Nov 24 20:46:34 crc kubenswrapper[4812]: I1124 20:46:34.095919 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h5786" event={"ID":"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0","Type":"ContainerStarted","Data":"b7bf7d9941ec8bdd867374a0ad2cf73d0c1929935331809190c4e54a3056cb05"} Nov 24 20:46:34 crc kubenswrapper[4812]: I1124 20:46:34.095957 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h5786" event={"ID":"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0","Type":"ContainerStarted","Data":"ef6cf32413b1ed7401e157f06f7fcd2320cf01ed94ce548357dade88425963ad"} Nov 24 20:46:34 crc kubenswrapper[4812]: I1124 20:46:34.115300 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-688e-account-create-6wp5w" podStartSLOduration=2.115283863 podStartE2EDuration="2.115283863s" podCreationTimestamp="2025-11-24 20:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:34.108237803 +0000 UTC m=+5387.897190184" watchObservedRunningTime="2025-11-24 20:46:34.115283863 +0000 UTC m=+5387.904236224" Nov 24 20:46:34 crc kubenswrapper[4812]: I1124 20:46:34.122942 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-h5786" podStartSLOduration=2.1229229099999998 podStartE2EDuration="2.12292291s" podCreationTimestamp="2025-11-24 20:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:34.122545159 +0000 UTC m=+5387.911497540" watchObservedRunningTime="2025-11-24 20:46:34.12292291 +0000 UTC m=+5387.911875281" Nov 24 20:46:35 crc kubenswrapper[4812]: I1124 20:46:35.106210 4812 generic.go:334] "Generic (PLEG): container finished" podID="119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0" 
containerID="b7bf7d9941ec8bdd867374a0ad2cf73d0c1929935331809190c4e54a3056cb05" exitCode=0 Nov 24 20:46:35 crc kubenswrapper[4812]: I1124 20:46:35.106382 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h5786" event={"ID":"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0","Type":"ContainerDied","Data":"b7bf7d9941ec8bdd867374a0ad2cf73d0c1929935331809190c4e54a3056cb05"} Nov 24 20:46:35 crc kubenswrapper[4812]: I1124 20:46:35.109153 4812 generic.go:334] "Generic (PLEG): container finished" podID="dc8526b1-356f-4e7f-bee9-4af150a24496" containerID="b3716cd45b7c1fb7cbc2ef23c00686aba0549080fe89d82dcb88541da2068703" exitCode=0 Nov 24 20:46:35 crc kubenswrapper[4812]: I1124 20:46:35.109196 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-688e-account-create-6wp5w" event={"ID":"dc8526b1-356f-4e7f-bee9-4af150a24496","Type":"ContainerDied","Data":"b3716cd45b7c1fb7cbc2ef23c00686aba0549080fe89d82dcb88541da2068703"} Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.605595 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-h5786" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.611027 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.646546 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8526b1-356f-4e7f-bee9-4af150a24496-operator-scripts\") pod \"dc8526b1-356f-4e7f-bee9-4af150a24496\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.646600 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxmj7\" (UniqueName: \"kubernetes.io/projected/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-kube-api-access-qxmj7\") pod \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.647402 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc8526b1-356f-4e7f-bee9-4af150a24496-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc8526b1-356f-4e7f-bee9-4af150a24496" (UID: "dc8526b1-356f-4e7f-bee9-4af150a24496"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.658316 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-kube-api-access-qxmj7" (OuterVolumeSpecName: "kube-api-access-qxmj7") pod "119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0" (UID: "119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0"). InnerVolumeSpecName "kube-api-access-qxmj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.747994 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs88d\" (UniqueName: \"kubernetes.io/projected/dc8526b1-356f-4e7f-bee9-4af150a24496-kube-api-access-cs88d\") pod \"dc8526b1-356f-4e7f-bee9-4af150a24496\" (UID: \"dc8526b1-356f-4e7f-bee9-4af150a24496\") " Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.748064 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-operator-scripts\") pod \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\" (UID: \"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0\") " Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.748708 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc8526b1-356f-4e7f-bee9-4af150a24496-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.748740 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxmj7\" (UniqueName: \"kubernetes.io/projected/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-kube-api-access-qxmj7\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.749031 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0" (UID: "119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.794021 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc8526b1-356f-4e7f-bee9-4af150a24496-kube-api-access-cs88d" (OuterVolumeSpecName: "kube-api-access-cs88d") pod "dc8526b1-356f-4e7f-bee9-4af150a24496" (UID: "dc8526b1-356f-4e7f-bee9-4af150a24496"). InnerVolumeSpecName "kube-api-access-cs88d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.850581 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs88d\" (UniqueName: \"kubernetes.io/projected/dc8526b1-356f-4e7f-bee9-4af150a24496-kube-api-access-cs88d\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:36 crc kubenswrapper[4812]: I1124 20:46:36.850638 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:37 crc kubenswrapper[4812]: I1124 20:46:37.138499 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-h5786" Nov 24 20:46:37 crc kubenswrapper[4812]: I1124 20:46:37.141177 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-h5786" event={"ID":"119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0","Type":"ContainerDied","Data":"ef6cf32413b1ed7401e157f06f7fcd2320cf01ed94ce548357dade88425963ad"} Nov 24 20:46:37 crc kubenswrapper[4812]: I1124 20:46:37.141252 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef6cf32413b1ed7401e157f06f7fcd2320cf01ed94ce548357dade88425963ad" Nov 24 20:46:37 crc kubenswrapper[4812]: I1124 20:46:37.146081 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-688e-account-create-6wp5w" Nov 24 20:46:37 crc kubenswrapper[4812]: I1124 20:46:37.150097 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-688e-account-create-6wp5w" event={"ID":"dc8526b1-356f-4e7f-bee9-4af150a24496","Type":"ContainerDied","Data":"cccc19d7fbfda43f6a472e528d4c811881203535ad2e1d07a6ce74ee47c2ddb4"} Nov 24 20:46:37 crc kubenswrapper[4812]: I1124 20:46:37.150185 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cccc19d7fbfda43f6a472e528d4c811881203535ad2e1d07a6ce74ee47c2ddb4" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.400109 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-x6dpw"] Nov 24 20:46:38 crc kubenswrapper[4812]: E1124 20:46:38.400736 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc8526b1-356f-4e7f-bee9-4af150a24496" containerName="mariadb-account-create" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.400751 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc8526b1-356f-4e7f-bee9-4af150a24496" containerName="mariadb-account-create" Nov 24 20:46:38 crc kubenswrapper[4812]: E1124 20:46:38.400782 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0" containerName="mariadb-database-create" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.400791 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0" containerName="mariadb-database-create" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.400990 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc8526b1-356f-4e7f-bee9-4af150a24496" containerName="mariadb-account-create" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.401005 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0" containerName="mariadb-database-create" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.401607 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.404440 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.404628 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.405015 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.406171 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c4htr" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.439442 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x6dpw"] Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.580923 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-combined-ca-bundle\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.581009 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqhz\" (UniqueName: \"kubernetes.io/projected/349153cd-04ae-4ca4-8a0d-f688bb60c77e-kube-api-access-rcqhz\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.581043 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-config-data\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.683028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-config-data\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.683219 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-combined-ca-bundle\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.683257 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqhz\" (UniqueName: \"kubernetes.io/projected/349153cd-04ae-4ca4-8a0d-f688bb60c77e-kube-api-access-rcqhz\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.690806 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-combined-ca-bundle\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " 
pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.691028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-config-data\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.700020 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqhz\" (UniqueName: \"kubernetes.io/projected/349153cd-04ae-4ca4-8a0d-f688bb60c77e-kube-api-access-rcqhz\") pod \"keystone-db-sync-x6dpw\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:38 crc kubenswrapper[4812]: I1124 20:46:38.739883 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:39 crc kubenswrapper[4812]: I1124 20:46:39.234627 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-x6dpw"] Nov 24 20:46:39 crc kubenswrapper[4812]: W1124 20:46:39.246435 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod349153cd_04ae_4ca4_8a0d_f688bb60c77e.slice/crio-5cfaa7c6326888ceabfd6073076cdf45c2c4046e043bba65a2dd653b8826f569 WatchSource:0}: Error finding container 5cfaa7c6326888ceabfd6073076cdf45c2c4046e043bba65a2dd653b8826f569: Status 404 returned error can't find the container with id 5cfaa7c6326888ceabfd6073076cdf45c2c4046e043bba65a2dd653b8826f569 Nov 24 20:46:40 crc kubenswrapper[4812]: I1124 20:46:40.184979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dpw" event={"ID":"349153cd-04ae-4ca4-8a0d-f688bb60c77e","Type":"ContainerStarted","Data":"417df04c1243c6e3c908c9f8f3bab6db48031f739393c4e59a0f4a4ed814df74"} Nov 24 20:46:40 crc kubenswrapper[4812]: I1124 20:46:40.185307 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dpw" event={"ID":"349153cd-04ae-4ca4-8a0d-f688bb60c77e","Type":"ContainerStarted","Data":"5cfaa7c6326888ceabfd6073076cdf45c2c4046e043bba65a2dd653b8826f569"} Nov 24 20:46:40 crc kubenswrapper[4812]: I1124 20:46:40.214911 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-x6dpw" podStartSLOduration=2.214883922 podStartE2EDuration="2.214883922s" podCreationTimestamp="2025-11-24 20:46:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:40.207979076 +0000 UTC m=+5393.996931507" watchObservedRunningTime="2025-11-24 20:46:40.214883922 +0000 UTC m=+5394.003836323" Nov 24 20:46:41 crc kubenswrapper[4812]: I1124 20:46:41.197048 4812 generic.go:334] "Generic (PLEG): container finished" podID="349153cd-04ae-4ca4-8a0d-f688bb60c77e" containerID="417df04c1243c6e3c908c9f8f3bab6db48031f739393c4e59a0f4a4ed814df74" exitCode=0 Nov 24 20:46:41 crc kubenswrapper[4812]: I1124 20:46:41.197092 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dpw" event={"ID":"349153cd-04ae-4ca4-8a0d-f688bb60c77e","Type":"ContainerDied","Data":"417df04c1243c6e3c908c9f8f3bab6db48031f739393c4e59a0f4a4ed814df74"} Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.662022 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.790018 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-combined-ca-bundle\") pod \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.790382 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-config-data\") pod \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.790474 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcqhz\" (UniqueName: \"kubernetes.io/projected/349153cd-04ae-4ca4-8a0d-f688bb60c77e-kube-api-access-rcqhz\") pod \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\" (UID: \"349153cd-04ae-4ca4-8a0d-f688bb60c77e\") " Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.796938 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349153cd-04ae-4ca4-8a0d-f688bb60c77e-kube-api-access-rcqhz" (OuterVolumeSpecName: "kube-api-access-rcqhz") pod "349153cd-04ae-4ca4-8a0d-f688bb60c77e" (UID: "349153cd-04ae-4ca4-8a0d-f688bb60c77e"). InnerVolumeSpecName "kube-api-access-rcqhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.819555 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "349153cd-04ae-4ca4-8a0d-f688bb60c77e" (UID: "349153cd-04ae-4ca4-8a0d-f688bb60c77e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.839037 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-config-data" (OuterVolumeSpecName: "config-data") pod "349153cd-04ae-4ca4-8a0d-f688bb60c77e" (UID: "349153cd-04ae-4ca4-8a0d-f688bb60c77e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.888999 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.895132 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.895186 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349153cd-04ae-4ca4-8a0d-f688bb60c77e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:42 crc kubenswrapper[4812]: I1124 20:46:42.895205 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcqhz\" (UniqueName: \"kubernetes.io/projected/349153cd-04ae-4ca4-8a0d-f688bb60c77e-kube-api-access-rcqhz\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.225073 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-x6dpw" event={"ID":"349153cd-04ae-4ca4-8a0d-f688bb60c77e","Type":"ContainerDied","Data":"5cfaa7c6326888ceabfd6073076cdf45c2c4046e043bba65a2dd653b8826f569"} Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.225142 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cfaa7c6326888ceabfd6073076cdf45c2c4046e043bba65a2dd653b8826f569" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.225227 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-x6dpw" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.411724 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c5dc88467-bm87w"] Nov 24 20:46:43 crc kubenswrapper[4812]: E1124 20:46:43.412473 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349153cd-04ae-4ca4-8a0d-f688bb60c77e" containerName="keystone-db-sync" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.412489 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="349153cd-04ae-4ca4-8a0d-f688bb60c77e" containerName="keystone-db-sync" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.412666 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="349153cd-04ae-4ca4-8a0d-f688bb60c77e" containerName="keystone-db-sync" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.413530 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.437714 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nsn72"] Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.438825 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.440444 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.440686 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.440719 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c4htr" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.440791 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.440948 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.469485 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5dc88467-bm87w"] Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.494385 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nsn72"] Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507586 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-scripts\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507633 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-config\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507653 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-fernet-keys\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507676 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-credential-keys\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507700 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8brfv\" (UniqueName: \"kubernetes.io/projected/daf91159-3454-4ddc-96d2-9ef78ad16e9c-kube-api-access-8brfv\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507742 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-dns-svc\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " 
pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507760 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-config-data\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507783 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-combined-ca-bundle\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507822 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507845 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.507881 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlbhl\" (UniqueName: \"kubernetes.io/projected/cdcce122-6441-49d4-8051-b4836f8f41d4-kube-api-access-hlbhl\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609221 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-credential-keys\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609274 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8brfv\" (UniqueName: \"kubernetes.io/projected/daf91159-3454-4ddc-96d2-9ef78ad16e9c-kube-api-access-8brfv\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609315 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-dns-svc\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609334 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-config-data\") pod \"keystone-bootstrap-nsn72\" (UID: 
\"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609377 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-combined-ca-bundle\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609416 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609435 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609475 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlbhl\" (UniqueName: \"kubernetes.io/projected/cdcce122-6441-49d4-8051-b4836f8f41d4-kube-api-access-hlbhl\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609493 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-scripts\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-config\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.609539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-fernet-keys\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.613016 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.613515 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc 
kubenswrapper[4812]: I1124 20:46:43.613924 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-scripts\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.614010 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-credential-keys\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.614140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-dns-svc\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.616217 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-combined-ca-bundle\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.623088 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-config-data\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.624067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-config\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.624980 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-fernet-keys\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.633379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlbhl\" (UniqueName: \"kubernetes.io/projected/cdcce122-6441-49d4-8051-b4836f8f41d4-kube-api-access-hlbhl\") pod \"dnsmasq-dns-6c5dc88467-bm87w\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.638944 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8brfv\" (UniqueName: \"kubernetes.io/projected/daf91159-3454-4ddc-96d2-9ef78ad16e9c-kube-api-access-8brfv\") pod \"keystone-bootstrap-nsn72\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.731153 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:43 crc kubenswrapper[4812]: I1124 20:46:43.764662 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:44 crc kubenswrapper[4812]: I1124 20:46:44.235882 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nsn72"] Nov 24 20:46:44 crc kubenswrapper[4812]: W1124 20:46:44.241410 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaf91159_3454_4ddc_96d2_9ef78ad16e9c.slice/crio-1956bcfd181c6abecfb8daf70c3774388808170c3885a36647501616da8c2471 WatchSource:0}: Error finding container 1956bcfd181c6abecfb8daf70c3774388808170c3885a36647501616da8c2471: Status 404 returned error can't find the container with id 1956bcfd181c6abecfb8daf70c3774388808170c3885a36647501616da8c2471 Nov 24 20:46:44 crc kubenswrapper[4812]: W1124 20:46:44.270057 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdcce122_6441_49d4_8051_b4836f8f41d4.slice/crio-5f05e4e77f60aa86d5de3fd69df6ba8fbdd396249778ebf730d378b9e94b43db WatchSource:0}: Error finding container 5f05e4e77f60aa86d5de3fd69df6ba8fbdd396249778ebf730d378b9e94b43db: Status 404 returned error can't find the container with id 5f05e4e77f60aa86d5de3fd69df6ba8fbdd396249778ebf730d378b9e94b43db Nov 24 20:46:44 crc kubenswrapper[4812]: I1124 20:46:44.271216 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5dc88467-bm87w"] Nov 24 20:46:45 crc kubenswrapper[4812]: I1124 20:46:45.247677 4812 generic.go:334] "Generic (PLEG): container finished" podID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerID="6913aac632e096748fa34e803f443ea3495dbfce7873545e4efa3e3fb5acec7e" exitCode=0 Nov 24 20:46:45 crc kubenswrapper[4812]: I1124 20:46:45.248156 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" event={"ID":"cdcce122-6441-49d4-8051-b4836f8f41d4","Type":"ContainerDied","Data":"6913aac632e096748fa34e803f443ea3495dbfce7873545e4efa3e3fb5acec7e"} Nov 24 20:46:45 crc kubenswrapper[4812]: I1124 20:46:45.248188 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" event={"ID":"cdcce122-6441-49d4-8051-b4836f8f41d4","Type":"ContainerStarted","Data":"5f05e4e77f60aa86d5de3fd69df6ba8fbdd396249778ebf730d378b9e94b43db"} Nov 24 20:46:45 crc kubenswrapper[4812]: I1124 20:46:45.252941 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsn72" event={"ID":"daf91159-3454-4ddc-96d2-9ef78ad16e9c","Type":"ContainerStarted","Data":"759c0b19a97ef6c1657ae321955603f1f6fe10c9fb2c3ff5a45759cfa1e13292"} Nov 24 20:46:45 crc kubenswrapper[4812]: I1124 20:46:45.252976 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsn72" event={"ID":"daf91159-3454-4ddc-96d2-9ef78ad16e9c","Type":"ContainerStarted","Data":"1956bcfd181c6abecfb8daf70c3774388808170c3885a36647501616da8c2471"} Nov 24 20:46:45 crc kubenswrapper[4812]: I1124 20:46:45.300667 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nsn72" podStartSLOduration=2.300649249 podStartE2EDuration="2.300649249s" podCreationTimestamp="2025-11-24 20:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-24 20:46:45.295212245 +0000 UTC m=+5399.084164616" watchObservedRunningTime="2025-11-24 20:46:45.300649249 +0000 UTC m=+5399.089601620" Nov 24 20:46:46 crc kubenswrapper[4812]: I1124 20:46:46.275149 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" event={"ID":"cdcce122-6441-49d4-8051-b4836f8f41d4","Type":"ContainerStarted","Data":"5a0c663e423acdb666c9ed5ed757c544bcb4868d0fc1983beb188ab8da26e35a"} Nov 24 20:46:46 crc kubenswrapper[4812]: I1124 20:46:46.275558 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:46 crc kubenswrapper[4812]: I1124 20:46:46.325599 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" podStartSLOduration=3.3255730039999998 podStartE2EDuration="3.325573004s" podCreationTimestamp="2025-11-24 20:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:46.313123131 +0000 UTC m=+5400.102075542" watchObservedRunningTime="2025-11-24 20:46:46.325573004 +0000 UTC m=+5400.114525415" Nov 24 20:46:46 crc kubenswrapper[4812]: I1124 20:46:46.976703 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:46:46 crc kubenswrapper[4812]: E1124 20:46:46.977105 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:46:48 crc kubenswrapper[4812]: I1124 20:46:48.297690 4812 generic.go:334] "Generic (PLEG): container finished" podID="daf91159-3454-4ddc-96d2-9ef78ad16e9c" containerID="759c0b19a97ef6c1657ae321955603f1f6fe10c9fb2c3ff5a45759cfa1e13292" exitCode=0 Nov 24 20:46:48 crc kubenswrapper[4812]: I1124 20:46:48.297809 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsn72" event={"ID":"daf91159-3454-4ddc-96d2-9ef78ad16e9c","Type":"ContainerDied","Data":"759c0b19a97ef6c1657ae321955603f1f6fe10c9fb2c3ff5a45759cfa1e13292"} Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.748927 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.849407 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-combined-ca-bundle\") pod \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.849555 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-config-data\") pod \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.849586 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-credential-keys\") pod \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.849659 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-scripts\") pod \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.849696 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-fernet-keys\") pod \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.849730 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8brfv\" (UniqueName: \"kubernetes.io/projected/daf91159-3454-4ddc-96d2-9ef78ad16e9c-kube-api-access-8brfv\") pod \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\" (UID: \"daf91159-3454-4ddc-96d2-9ef78ad16e9c\") " Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.855838 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-scripts" (OuterVolumeSpecName: "scripts") pod "daf91159-3454-4ddc-96d2-9ef78ad16e9c" (UID: "daf91159-3454-4ddc-96d2-9ef78ad16e9c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.856314 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "daf91159-3454-4ddc-96d2-9ef78ad16e9c" (UID: "daf91159-3454-4ddc-96d2-9ef78ad16e9c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.856355 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daf91159-3454-4ddc-96d2-9ef78ad16e9c-kube-api-access-8brfv" (OuterVolumeSpecName: "kube-api-access-8brfv") pod "daf91159-3454-4ddc-96d2-9ef78ad16e9c" (UID: "daf91159-3454-4ddc-96d2-9ef78ad16e9c"). InnerVolumeSpecName "kube-api-access-8brfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.857071 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "daf91159-3454-4ddc-96d2-9ef78ad16e9c" (UID: "daf91159-3454-4ddc-96d2-9ef78ad16e9c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.876512 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-config-data" (OuterVolumeSpecName: "config-data") pod "daf91159-3454-4ddc-96d2-9ef78ad16e9c" (UID: "daf91159-3454-4ddc-96d2-9ef78ad16e9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.888547 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daf91159-3454-4ddc-96d2-9ef78ad16e9c" (UID: "daf91159-3454-4ddc-96d2-9ef78ad16e9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.951278 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.951306 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.951319 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8brfv\" (UniqueName: \"kubernetes.io/projected/daf91159-3454-4ddc-96d2-9ef78ad16e9c-kube-api-access-8brfv\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.951333 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.951359 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:49 crc kubenswrapper[4812]: I1124 20:46:49.951367 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/daf91159-3454-4ddc-96d2-9ef78ad16e9c-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.338599 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nsn72" event={"ID":"daf91159-3454-4ddc-96d2-9ef78ad16e9c","Type":"ContainerDied","Data":"1956bcfd181c6abecfb8daf70c3774388808170c3885a36647501616da8c2471"} Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.338652 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1956bcfd181c6abecfb8daf70c3774388808170c3885a36647501616da8c2471" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.338742 4812 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nsn72" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.429038 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nsn72"] Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.441410 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nsn72"] Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.515138 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-78hhg"] Nov 24 20:46:50 crc kubenswrapper[4812]: E1124 20:46:50.515676 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daf91159-3454-4ddc-96d2-9ef78ad16e9c" containerName="keystone-bootstrap" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.515722 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="daf91159-3454-4ddc-96d2-9ef78ad16e9c" containerName="keystone-bootstrap" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.516062 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="daf91159-3454-4ddc-96d2-9ef78ad16e9c" containerName="keystone-bootstrap" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.517021 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.522448 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.522867 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.523187 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c4htr" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.523759 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.524088 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.537937 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-78hhg"] Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.588108 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-config-data\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.588259 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-combined-ca-bundle\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.588451 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-scripts\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc 
kubenswrapper[4812]: I1124 20:46:50.588534 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-credential-keys\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.588594 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-fernet-keys\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.588641 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw897\" (UniqueName: \"kubernetes.io/projected/a84172ce-260b-4d2e-ae44-f247a870acdf-kube-api-access-pw897\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.691050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-combined-ca-bundle\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.691194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-scripts\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.691253 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-credential-keys\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.691296 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-fernet-keys\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.691341 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw897\" (UniqueName: \"kubernetes.io/projected/a84172ce-260b-4d2e-ae44-f247a870acdf-kube-api-access-pw897\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.691556 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-config-data\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.696885 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-fernet-keys\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.697642 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-combined-ca-bundle\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.697973 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-config-data\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.698772 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-scripts\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.703475 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-credential-keys\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.712899 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw897\" (UniqueName: \"kubernetes.io/projected/a84172ce-260b-4d2e-ae44-f247a870acdf-kube-api-access-pw897\") pod \"keystone-bootstrap-78hhg\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.906731 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:50 crc kubenswrapper[4812]: I1124 20:46:50.998749 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daf91159-3454-4ddc-96d2-9ef78ad16e9c" path="/var/lib/kubelet/pods/daf91159-3454-4ddc-96d2-9ef78ad16e9c/volumes" Nov 24 20:46:51 crc kubenswrapper[4812]: I1124 20:46:51.396523 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-78hhg"] Nov 24 20:46:52 crc kubenswrapper[4812]: I1124 20:46:52.358467 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78hhg" event={"ID":"a84172ce-260b-4d2e-ae44-f247a870acdf","Type":"ContainerStarted","Data":"fe80deb1c02551173cd990b9ad31f041cf3f3c706a986f035fde6869e9b4d5f4"} Nov 24 20:46:52 crc kubenswrapper[4812]: I1124 20:46:52.358516 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78hhg" event={"ID":"a84172ce-260b-4d2e-ae44-f247a870acdf","Type":"ContainerStarted","Data":"50e0aa8a7eae7f35037c19a60c7bae9b7ceb04cb676ba57365b463d80ad1a914"} Nov 24 20:46:52 crc kubenswrapper[4812]: I1124 20:46:52.387571 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-78hhg" podStartSLOduration=2.387544213 podStartE2EDuration="2.387544213s" podCreationTimestamp="2025-11-24 20:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:52.375716398 +0000 UTC m=+5406.164668769" watchObservedRunningTime="2025-11-24 20:46:52.387544213 +0000 UTC m=+5406.176496614" Nov 24 20:46:53 crc kubenswrapper[4812]: I1124 20:46:53.733582 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:46:53 crc kubenswrapper[4812]: I1124 20:46:53.814632 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f9f8f7d7-5z5lr"] Nov 24 20:46:53 crc kubenswrapper[4812]: I1124 20:46:53.814992 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" podUID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerName="dnsmasq-dns" containerID="cri-o://460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8" gracePeriod=10 Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.358364 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.387229 4812 generic.go:334] "Generic (PLEG): container finished" podID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerID="460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8" exitCode=0 Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.387540 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" event={"ID":"5505d922-48b8-4b5e-9c7f-97446de3025c","Type":"ContainerDied","Data":"460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8"} Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.387659 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" event={"ID":"5505d922-48b8-4b5e-9c7f-97446de3025c","Type":"ContainerDied","Data":"267d17aeeb27e6be703ac3753c42e1cbf8d5ca05d06b5dbba95026d4c11cfbf4"} Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.387692 4812 scope.go:117] "RemoveContainer" containerID="460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.387561 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57f9f8f7d7-5z5lr" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.395733 4812 generic.go:334] "Generic (PLEG): container finished" podID="a84172ce-260b-4d2e-ae44-f247a870acdf" containerID="fe80deb1c02551173cd990b9ad31f041cf3f3c706a986f035fde6869e9b4d5f4" exitCode=0 Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.395768 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78hhg" event={"ID":"a84172ce-260b-4d2e-ae44-f247a870acdf","Type":"ContainerDied","Data":"fe80deb1c02551173cd990b9ad31f041cf3f3c706a986f035fde6869e9b4d5f4"} Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.430660 4812 scope.go:117] "RemoveContainer" containerID="43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.455275 4812 scope.go:117] "RemoveContainer" containerID="460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8" Nov 24 20:46:54 crc kubenswrapper[4812]: E1124 20:46:54.455925 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8\": container with ID starting with 460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8 not found: ID does not exist" containerID="460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.455975 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8"} err="failed to get container status \"460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8\": rpc error: code = NotFound desc = could not find container \"460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8\": container with ID starting with 460f643affba54e33cf7556f4ca44b8ef3960d2ac166b17fa23e7d4d373c7dc8 not found: ID does not exist" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.456006 4812 scope.go:117] "RemoveContainer" containerID="43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874" Nov 24 20:46:54 crc kubenswrapper[4812]: E1124 20:46:54.458501 4812 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874\": container with ID starting with 43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874 not found: ID does not exist" containerID="43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.458534 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874"} err="failed to get container status \"43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874\": rpc error: code = NotFound desc = could not find container \"43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874\": container with ID starting with 43b9ca739fe947f209fc405a546a2f68e85b4f9b5cc91656b8683fc0597f2874 not found: ID does not exist" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.492165 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4rzk\" (UniqueName: \"kubernetes.io/projected/5505d922-48b8-4b5e-9c7f-97446de3025c-kube-api-access-b4rzk\") pod \"5505d922-48b8-4b5e-9c7f-97446de3025c\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.492244 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-config\") pod \"5505d922-48b8-4b5e-9c7f-97446de3025c\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.492282 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-nb\") pod \"5505d922-48b8-4b5e-9c7f-97446de3025c\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.492427 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-sb\") pod \"5505d922-48b8-4b5e-9c7f-97446de3025c\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.492517 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-dns-svc\") pod \"5505d922-48b8-4b5e-9c7f-97446de3025c\" (UID: \"5505d922-48b8-4b5e-9c7f-97446de3025c\") " Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.500255 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5505d922-48b8-4b5e-9c7f-97446de3025c-kube-api-access-b4rzk" (OuterVolumeSpecName: "kube-api-access-b4rzk") pod "5505d922-48b8-4b5e-9c7f-97446de3025c" (UID: "5505d922-48b8-4b5e-9c7f-97446de3025c"). InnerVolumeSpecName "kube-api-access-b4rzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.537564 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-config" (OuterVolumeSpecName: "config") pod "5505d922-48b8-4b5e-9c7f-97446de3025c" (UID: "5505d922-48b8-4b5e-9c7f-97446de3025c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.539053 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5505d922-48b8-4b5e-9c7f-97446de3025c" (UID: "5505d922-48b8-4b5e-9c7f-97446de3025c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.539763 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5505d922-48b8-4b5e-9c7f-97446de3025c" (UID: "5505d922-48b8-4b5e-9c7f-97446de3025c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.540976 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5505d922-48b8-4b5e-9c7f-97446de3025c" (UID: "5505d922-48b8-4b5e-9c7f-97446de3025c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.594484 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.594933 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4rzk\" (UniqueName: \"kubernetes.io/projected/5505d922-48b8-4b5e-9c7f-97446de3025c-kube-api-access-b4rzk\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.594992 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.595053 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.595109 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5505d922-48b8-4b5e-9c7f-97446de3025c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.733994 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57f9f8f7d7-5z5lr"] Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.744724 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57f9f8f7d7-5z5lr"] Nov 24 20:46:54 crc kubenswrapper[4812]: I1124 20:46:54.979315 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5505d922-48b8-4b5e-9c7f-97446de3025c" path="/var/lib/kubelet/pods/5505d922-48b8-4b5e-9c7f-97446de3025c/volumes" Nov 24 20:46:55 crc kubenswrapper[4812]: I1124 20:46:55.903548 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.022109 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-fernet-keys\") pod \"a84172ce-260b-4d2e-ae44-f247a870acdf\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.022215 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-combined-ca-bundle\") pod \"a84172ce-260b-4d2e-ae44-f247a870acdf\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.022257 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-config-data\") pod \"a84172ce-260b-4d2e-ae44-f247a870acdf\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.022369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-scripts\") pod \"a84172ce-260b-4d2e-ae44-f247a870acdf\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.022406 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-credential-keys\") pod \"a84172ce-260b-4d2e-ae44-f247a870acdf\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.022428 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw897\" (UniqueName: \"kubernetes.io/projected/a84172ce-260b-4d2e-ae44-f247a870acdf-kube-api-access-pw897\") pod \"a84172ce-260b-4d2e-ae44-f247a870acdf\" (UID: \"a84172ce-260b-4d2e-ae44-f247a870acdf\") " Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.029024 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-scripts" (OuterVolumeSpecName: "scripts") pod "a84172ce-260b-4d2e-ae44-f247a870acdf" (UID: "a84172ce-260b-4d2e-ae44-f247a870acdf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.029436 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a84172ce-260b-4d2e-ae44-f247a870acdf" (UID: "a84172ce-260b-4d2e-ae44-f247a870acdf"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.029950 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a84172ce-260b-4d2e-ae44-f247a870acdf-kube-api-access-pw897" (OuterVolumeSpecName: "kube-api-access-pw897") pod "a84172ce-260b-4d2e-ae44-f247a870acdf" (UID: "a84172ce-260b-4d2e-ae44-f247a870acdf"). InnerVolumeSpecName "kube-api-access-pw897". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.030832 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "a84172ce-260b-4d2e-ae44-f247a870acdf" (UID: "a84172ce-260b-4d2e-ae44-f247a870acdf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.064045 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-config-data" (OuterVolumeSpecName: "config-data") pod "a84172ce-260b-4d2e-ae44-f247a870acdf" (UID: "a84172ce-260b-4d2e-ae44-f247a870acdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.064110 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a84172ce-260b-4d2e-ae44-f247a870acdf" (UID: "a84172ce-260b-4d2e-ae44-f247a870acdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.125040 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.125106 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.125138 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.125161 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.125183 4812 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a84172ce-260b-4d2e-ae44-f247a870acdf-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.125208 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw897\" (UniqueName: \"kubernetes.io/projected/a84172ce-260b-4d2e-ae44-f247a870acdf-kube-api-access-pw897\") on node \"crc\" DevicePath \"\"" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.425111 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-78hhg" event={"ID":"a84172ce-260b-4d2e-ae44-f247a870acdf","Type":"ContainerDied","Data":"50e0aa8a7eae7f35037c19a60c7bae9b7ceb04cb676ba57365b463d80ad1a914"} Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.425151 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e0aa8a7eae7f35037c19a60c7bae9b7ceb04cb676ba57365b463d80ad1a914" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.425211 4812 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-78hhg" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.548944 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-65845ffd66-mhr2r"] Nov 24 20:46:56 crc kubenswrapper[4812]: E1124 20:46:56.557576 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerName="dnsmasq-dns" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.557609 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerName="dnsmasq-dns" Nov 24 20:46:56 crc kubenswrapper[4812]: E1124 20:46:56.557644 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a84172ce-260b-4d2e-ae44-f247a870acdf" containerName="keystone-bootstrap" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.557652 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a84172ce-260b-4d2e-ae44-f247a870acdf" containerName="keystone-bootstrap" Nov 24 20:46:56 crc kubenswrapper[4812]: E1124 20:46:56.557666 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerName="init" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.557674 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerName="init" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.557881 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a84172ce-260b-4d2e-ae44-f247a870acdf" containerName="keystone-bootstrap" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.557889 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5505d922-48b8-4b5e-9c7f-97446de3025c" containerName="dnsmasq-dns" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.558510 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.566028 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-c4htr" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.566184 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.566234 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.566439 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.566558 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.566659 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.577373 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65845ffd66-mhr2r"] Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635096 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-config-data\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635136 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-scripts\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635224 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-internal-tls-certs\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635254 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-credential-keys\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635277 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj647\" (UniqueName: \"kubernetes.io/projected/cf884fd3-5b39-4003-9246-e340b17bc43f-kube-api-access-jj647\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635291 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-fernet-keys\") pod \"keystone-65845ffd66-mhr2r\" (UID: 
\"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-public-tls-certs\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.635324 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-combined-ca-bundle\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.736923 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-credential-keys\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.736981 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj647\" (UniqueName: \"kubernetes.io/projected/cf884fd3-5b39-4003-9246-e340b17bc43f-kube-api-access-jj647\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.737005 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-fernet-keys\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.737025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-public-tls-certs\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.737042 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-combined-ca-bundle\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.737076 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-config-data\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.737098 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-scripts\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " 
pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.737182 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-internal-tls-certs\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.740757 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-public-tls-certs\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.740805 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-config-data\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.741246 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-fernet-keys\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.742173 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-scripts\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.742421 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-combined-ca-bundle\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.742546 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-credential-keys\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.753145 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf884fd3-5b39-4003-9246-e340b17bc43f-internal-tls-certs\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.753943 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj647\" (UniqueName: \"kubernetes.io/projected/cf884fd3-5b39-4003-9246-e340b17bc43f-kube-api-access-jj647\") pod \"keystone-65845ffd66-mhr2r\" (UID: \"cf884fd3-5b39-4003-9246-e340b17bc43f\") " pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:56 crc kubenswrapper[4812]: I1124 20:46:56.892548 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:57 crc kubenswrapper[4812]: W1124 20:46:57.196219 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf884fd3_5b39_4003_9246_e340b17bc43f.slice/crio-7ac20288aa46cd7a67ffc8b2c452480d36ab06a4a71b39e6f121b3e5688d834d WatchSource:0}: Error finding container 7ac20288aa46cd7a67ffc8b2c452480d36ab06a4a71b39e6f121b3e5688d834d: Status 404 returned error can't find the container with id 7ac20288aa46cd7a67ffc8b2c452480d36ab06a4a71b39e6f121b3e5688d834d Nov 24 20:46:57 crc kubenswrapper[4812]: I1124 20:46:57.199233 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65845ffd66-mhr2r"] Nov 24 20:46:57 crc kubenswrapper[4812]: I1124 20:46:57.442083 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65845ffd66-mhr2r" event={"ID":"cf884fd3-5b39-4003-9246-e340b17bc43f","Type":"ContainerStarted","Data":"7ac20288aa46cd7a67ffc8b2c452480d36ab06a4a71b39e6f121b3e5688d834d"} Nov 24 20:46:57 crc kubenswrapper[4812]: I1124 20:46:57.966607 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:46:57 crc kubenswrapper[4812]: E1124 20:46:57.966990 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:46:58 crc kubenswrapper[4812]: I1124 20:46:58.458366 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65845ffd66-mhr2r" event={"ID":"cf884fd3-5b39-4003-9246-e340b17bc43f","Type":"ContainerStarted","Data":"d4c83eb95dbd2c12d64096886933271ef5fc98d4f25fd8f6ee984ec63e352bbb"} Nov 24 20:46:58 crc kubenswrapper[4812]: I1124 20:46:58.460498 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:46:58 crc kubenswrapper[4812]: I1124 20:46:58.494195 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-65845ffd66-mhr2r" podStartSLOduration=2.494166672 podStartE2EDuration="2.494166672s" podCreationTimestamp="2025-11-24 20:46:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:46:58.488214983 +0000 UTC m=+5412.277167394" watchObservedRunningTime="2025-11-24 20:46:58.494166672 +0000 UTC m=+5412.283119053" Nov 24 20:47:11 crc kubenswrapper[4812]: I1124 20:47:11.966204 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:47:11 crc kubenswrapper[4812]: E1124 20:47:11.967464 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:47:22 crc kubenswrapper[4812]: I1124 20:47:22.966052 4812 
scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:47:22 crc kubenswrapper[4812]: E1124 20:47:22.967108 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:47:28 crc kubenswrapper[4812]: I1124 20:47:28.615913 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-65845ffd66-mhr2r" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.845230 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.847389 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.851633 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.851916 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qnn94" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.852040 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.859190 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.910185 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 24 20:47:32 crc kubenswrapper[4812]: E1124 20:47:32.910851 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-95h94 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="8de5de0e-6c41-4e4a-a978-20318da4a9e8" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.922970 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.955803 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.957500 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.984850 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de5de0e-6c41-4e4a-a978-20318da4a9e8" path="/var/lib/kubelet/pods/8de5de0e-6c41-4e4a-a978-20318da4a9e8/volumes" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.985183 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990263 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8de5de0e-6c41-4e4a-a978-20318da4a9e8-openstack-config\") pod \"openstackclient\" (UID: \"8de5de0e-6c41-4e4a-a978-20318da4a9e8\") " pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config-secret\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990409 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990432 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de5de0e-6c41-4e4a-a978-20318da4a9e8-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8de5de0e-6c41-4e4a-a978-20318da4a9e8\") " pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8de5de0e-6c41-4e4a-a978-20318da4a9e8-openstack-config-secret\") pod \"openstackclient\" (UID: \"8de5de0e-6c41-4e4a-a978-20318da4a9e8\") " pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990512 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95h94\" (UniqueName: \"kubernetes.io/projected/8de5de0e-6c41-4e4a-a978-20318da4a9e8-kube-api-access-95h94\") pod \"openstackclient\" (UID: \"8de5de0e-6c41-4e4a-a978-20318da4a9e8\") " pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990544 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd2nl\" (UniqueName: \"kubernetes.io/projected/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-kube-api-access-xd2nl\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:32 crc kubenswrapper[4812]: I1124 20:47:32.990586 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.044240 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.052164 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.063332 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8de5de0e-6c41-4e4a-a978-20318da4a9e8" podUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091141 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091246 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config-secret\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091353 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd2nl\" (UniqueName: \"kubernetes.io/projected/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-kube-api-access-xd2nl\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091406 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8de5de0e-6c41-4e4a-a978-20318da4a9e8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091417 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8de5de0e-6c41-4e4a-a978-20318da4a9e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091425 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95h94\" (UniqueName: \"kubernetes.io/projected/8de5de0e-6c41-4e4a-a978-20318da4a9e8-kube-api-access-95h94\") on node \"crc\" DevicePath \"\"" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.091437 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8de5de0e-6c41-4e4a-a978-20318da4a9e8-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.092669 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config\") pod 
\"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.101889 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config-secret\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.119988 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.131892 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd2nl\" (UniqueName: \"kubernetes.io/projected/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-kube-api-access-xd2nl\") pod \"openstackclient\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.278637 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 20:47:33 crc kubenswrapper[4812]: I1124 20:47:33.728282 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 20:47:34 crc kubenswrapper[4812]: I1124 20:47:34.065144 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bee308e6-8ad3-4d4b-b3e4-34ba092d429e","Type":"ContainerStarted","Data":"70b4872d25528ca1e916889a66f055a1028698286e91ea8753a2497d4c45c466"} Nov 24 20:47:34 crc kubenswrapper[4812]: I1124 20:47:34.065174 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 20:47:34 crc kubenswrapper[4812]: I1124 20:47:34.082782 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8de5de0e-6c41-4e4a-a978-20318da4a9e8" podUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" Nov 24 20:47:35 crc kubenswrapper[4812]: I1124 20:47:35.079168 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bee308e6-8ad3-4d4b-b3e4-34ba092d429e","Type":"ContainerStarted","Data":"3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e"} Nov 24 20:47:35 crc kubenswrapper[4812]: I1124 20:47:35.104086 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.104061587 podStartE2EDuration="3.104061587s" podCreationTimestamp="2025-11-24 20:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:47:35.100884797 +0000 UTC m=+5448.889837218" watchObservedRunningTime="2025-11-24 20:47:35.104061587 +0000 UTC m=+5448.893013988" Nov 24 20:47:37 crc kubenswrapper[4812]: I1124 20:47:37.966274 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:47:37 crc kubenswrapper[4812]: E1124 20:47:37.967100 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:47:49 crc kubenswrapper[4812]: I1124 20:47:49.965720 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:47:49 crc kubenswrapper[4812]: E1124 20:47:49.966872 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.379702 4812 scope.go:117] "RemoveContainer" containerID="30bec8470eb2fad5fa3632a9fe05bf6d8957bc96667337016d2bedc67f57555e" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.405105 4812 scope.go:117] "RemoveContainer" containerID="032b9716509cb826cd2c9fa9280cdde971c0c32ffa278f6528739b37176aadb7" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.458430 4812 scope.go:117] "RemoveContainer" containerID="ee17754759a0a69b2f56d0e7308ee2342ced3889325ec9e623c64c85f05476d7" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.492987 4812 scope.go:117] "RemoveContainer" containerID="20a8f0f36c2f2d6f0ae28a7f82c46f198393607d99c2dbe2b18541246a316305" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.528929 4812 scope.go:117] "RemoveContainer" containerID="e627b21f9b4742f510890b23e21a47f55cee0a5bb5b493e3d8b3f607cd9c8de2" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.571940 4812 scope.go:117] "RemoveContainer" 
containerID="eb6309e6554adc9c1e72bd5de294bb46615395e03cc255f4ad72a11b941457ec" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.595115 4812 scope.go:117] "RemoveContainer" containerID="21f3ce6f18737be1eb3e1d1ffd0b21ae7c6433e60b2d3bf839f5f09b78105d07" Nov 24 20:47:53 crc kubenswrapper[4812]: I1124 20:47:53.611495 4812 scope.go:117] "RemoveContainer" containerID="dc8d22dfff79c932edf45fdc8a10bf030f76c8c92aabdb16755aa411fd900d4b" Nov 24 20:48:02 crc kubenswrapper[4812]: I1124 20:48:02.965486 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:48:02 crc kubenswrapper[4812]: E1124 20:48:02.966442 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:48:15 crc kubenswrapper[4812]: I1124 20:48:15.966534 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:48:15 crc kubenswrapper[4812]: E1124 20:48:15.968259 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:48:30 crc kubenswrapper[4812]: I1124 20:48:30.965685 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:48:30 crc kubenswrapper[4812]: E1124 20:48:30.966456 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:48:45 crc kubenswrapper[4812]: I1124 20:48:45.966154 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:48:45 crc kubenswrapper[4812]: E1124 20:48:45.966869 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:48:57 crc kubenswrapper[4812]: I1124 20:48:57.965870 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:48:57 crc kubenswrapper[4812]: E1124 20:48:57.967227 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:49:03 crc kubenswrapper[4812]: E1124 20:49:03.686425 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:43270->38.102.83.36:46073: write tcp 38.102.83.36:43270->38.102.83.36:46073: write: broken pipe Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.786888 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-fl42b"] Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.788395 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.793856 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fl42b"] Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.885596 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7473-account-create-mg6kt"] Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.887568 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.889898 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.896671 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7473-account-create-mg6kt"] Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.929832 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcf16335-ccec-4697-ae06-28ee97e365bf-operator-scripts\") pod \"barbican-7473-account-create-mg6kt\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.930109 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxnxq\" (UniqueName: \"kubernetes.io/projected/b1aeaff7-a31a-4a80-9513-2046a638e838-kube-api-access-kxnxq\") pod \"barbican-db-create-fl42b\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.930233 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9f8t\" (UniqueName: \"kubernetes.io/projected/dcf16335-ccec-4697-ae06-28ee97e365bf-kube-api-access-z9f8t\") pod \"barbican-7473-account-create-mg6kt\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:07 crc kubenswrapper[4812]: I1124 20:49:07.931268 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1aeaff7-a31a-4a80-9513-2046a638e838-operator-scripts\") pod \"barbican-db-create-fl42b\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.032931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b1aeaff7-a31a-4a80-9513-2046a638e838-operator-scripts\") pod \"barbican-db-create-fl42b\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.033000 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcf16335-ccec-4697-ae06-28ee97e365bf-operator-scripts\") pod \"barbican-7473-account-create-mg6kt\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.033026 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxnxq\" (UniqueName: \"kubernetes.io/projected/b1aeaff7-a31a-4a80-9513-2046a638e838-kube-api-access-kxnxq\") pod \"barbican-db-create-fl42b\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.033049 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9f8t\" (UniqueName: \"kubernetes.io/projected/dcf16335-ccec-4697-ae06-28ee97e365bf-kube-api-access-z9f8t\") pod \"barbican-7473-account-create-mg6kt\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.033833 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1aeaff7-a31a-4a80-9513-2046a638e838-operator-scripts\") pod \"barbican-db-create-fl42b\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.034081 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcf16335-ccec-4697-ae06-28ee97e365bf-operator-scripts\") pod \"barbican-7473-account-create-mg6kt\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.057112 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9f8t\" (UniqueName: \"kubernetes.io/projected/dcf16335-ccec-4697-ae06-28ee97e365bf-kube-api-access-z9f8t\") pod \"barbican-7473-account-create-mg6kt\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.058256 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxnxq\" (UniqueName: \"kubernetes.io/projected/b1aeaff7-a31a-4a80-9513-2046a638e838-kube-api-access-kxnxq\") pod \"barbican-db-create-fl42b\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.110233 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.209715 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.349979 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-fl42b"] Nov 24 20:49:08 crc kubenswrapper[4812]: I1124 20:49:08.658219 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7473-account-create-mg6kt"] Nov 24 20:49:08 crc kubenswrapper[4812]: W1124 20:49:08.658512 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcf16335_ccec_4697_ae06_28ee97e365bf.slice/crio-648086d89840217b4601a8b28f94a6911a064d3568672f7a43d7e8cf14781186 WatchSource:0}: Error finding container 648086d89840217b4601a8b28f94a6911a064d3568672f7a43d7e8cf14781186: Status 404 returned error can't find the container with id 648086d89840217b4601a8b28f94a6911a064d3568672f7a43d7e8cf14781186 Nov 24 20:49:09 crc kubenswrapper[4812]: I1124 20:49:09.187934 4812 generic.go:334] "Generic (PLEG): container finished" podID="b1aeaff7-a31a-4a80-9513-2046a638e838" containerID="c402721e22973a18d9629a69d828e7794fb82e4df24161c5af42d0d888ff6ac3" exitCode=0 Nov 24 20:49:09 crc kubenswrapper[4812]: I1124 20:49:09.188004 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fl42b" event={"ID":"b1aeaff7-a31a-4a80-9513-2046a638e838","Type":"ContainerDied","Data":"c402721e22973a18d9629a69d828e7794fb82e4df24161c5af42d0d888ff6ac3"} Nov 24 20:49:09 crc kubenswrapper[4812]: I1124 20:49:09.188363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fl42b" event={"ID":"b1aeaff7-a31a-4a80-9513-2046a638e838","Type":"ContainerStarted","Data":"5e5635c5a7607f10302040dc304f494de0df93105d098db3352550a19722935d"} Nov 24 20:49:09 crc kubenswrapper[4812]: I1124 20:49:09.191059 4812 generic.go:334] "Generic (PLEG): container finished" podID="dcf16335-ccec-4697-ae06-28ee97e365bf" containerID="d163a649335407618204991a880eba9f773f354555673b85f6ec510bcfec42f0" exitCode=0 Nov 24 20:49:09 crc kubenswrapper[4812]: I1124 20:49:09.191102 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7473-account-create-mg6kt" event={"ID":"dcf16335-ccec-4697-ae06-28ee97e365bf","Type":"ContainerDied","Data":"d163a649335407618204991a880eba9f773f354555673b85f6ec510bcfec42f0"} Nov 24 20:49:09 crc kubenswrapper[4812]: I1124 20:49:09.191135 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7473-account-create-mg6kt" event={"ID":"dcf16335-ccec-4697-ae06-28ee97e365bf","Type":"ContainerStarted","Data":"648086d89840217b4601a8b28f94a6911a064d3568672f7a43d7e8cf14781186"} Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.649362 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.662477 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.790022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxnxq\" (UniqueName: \"kubernetes.io/projected/b1aeaff7-a31a-4a80-9513-2046a638e838-kube-api-access-kxnxq\") pod \"b1aeaff7-a31a-4a80-9513-2046a638e838\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.790130 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1aeaff7-a31a-4a80-9513-2046a638e838-operator-scripts\") pod \"b1aeaff7-a31a-4a80-9513-2046a638e838\" (UID: \"b1aeaff7-a31a-4a80-9513-2046a638e838\") " Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.790157 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcf16335-ccec-4697-ae06-28ee97e365bf-operator-scripts\") pod \"dcf16335-ccec-4697-ae06-28ee97e365bf\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.790389 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9f8t\" (UniqueName: \"kubernetes.io/projected/dcf16335-ccec-4697-ae06-28ee97e365bf-kube-api-access-z9f8t\") pod \"dcf16335-ccec-4697-ae06-28ee97e365bf\" (UID: \"dcf16335-ccec-4697-ae06-28ee97e365bf\") " Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.791367 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1aeaff7-a31a-4a80-9513-2046a638e838-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1aeaff7-a31a-4a80-9513-2046a638e838" (UID: "b1aeaff7-a31a-4a80-9513-2046a638e838"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.792121 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcf16335-ccec-4697-ae06-28ee97e365bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dcf16335-ccec-4697-ae06-28ee97e365bf" (UID: "dcf16335-ccec-4697-ae06-28ee97e365bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.795830 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcf16335-ccec-4697-ae06-28ee97e365bf-kube-api-access-z9f8t" (OuterVolumeSpecName: "kube-api-access-z9f8t") pod "dcf16335-ccec-4697-ae06-28ee97e365bf" (UID: "dcf16335-ccec-4697-ae06-28ee97e365bf"). InnerVolumeSpecName "kube-api-access-z9f8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.803534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1aeaff7-a31a-4a80-9513-2046a638e838-kube-api-access-kxnxq" (OuterVolumeSpecName: "kube-api-access-kxnxq") pod "b1aeaff7-a31a-4a80-9513-2046a638e838" (UID: "b1aeaff7-a31a-4a80-9513-2046a638e838"). InnerVolumeSpecName "kube-api-access-kxnxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.891937 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9f8t\" (UniqueName: \"kubernetes.io/projected/dcf16335-ccec-4697-ae06-28ee97e365bf-kube-api-access-z9f8t\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.891972 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxnxq\" (UniqueName: \"kubernetes.io/projected/b1aeaff7-a31a-4a80-9513-2046a638e838-kube-api-access-kxnxq\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.891986 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1aeaff7-a31a-4a80-9513-2046a638e838-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.891997 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcf16335-ccec-4697-ae06-28ee97e365bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:10 crc kubenswrapper[4812]: I1124 20:49:10.965597 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:49:10 crc kubenswrapper[4812]: E1124 20:49:10.965873 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:49:11 crc kubenswrapper[4812]: I1124 20:49:11.214052 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7473-account-create-mg6kt" event={"ID":"dcf16335-ccec-4697-ae06-28ee97e365bf","Type":"ContainerDied","Data":"648086d89840217b4601a8b28f94a6911a064d3568672f7a43d7e8cf14781186"} Nov 24 20:49:11 crc kubenswrapper[4812]: I1124 20:49:11.214114 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="648086d89840217b4601a8b28f94a6911a064d3568672f7a43d7e8cf14781186" Nov 24 20:49:11 crc kubenswrapper[4812]: I1124 20:49:11.214116 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7473-account-create-mg6kt" Nov 24 20:49:11 crc kubenswrapper[4812]: I1124 20:49:11.216946 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-fl42b" event={"ID":"b1aeaff7-a31a-4a80-9513-2046a638e838","Type":"ContainerDied","Data":"5e5635c5a7607f10302040dc304f494de0df93105d098db3352550a19722935d"} Nov 24 20:49:11 crc kubenswrapper[4812]: I1124 20:49:11.216996 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e5635c5a7607f10302040dc304f494de0df93105d098db3352550a19722935d" Nov 24 20:49:11 crc kubenswrapper[4812]: I1124 20:49:11.217001 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-fl42b" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.197998 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-p2j52"] Nov 24 20:49:13 crc kubenswrapper[4812]: E1124 20:49:13.199688 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcf16335-ccec-4697-ae06-28ee97e365bf" containerName="mariadb-account-create" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.199735 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcf16335-ccec-4697-ae06-28ee97e365bf" containerName="mariadb-account-create" Nov 24 20:49:13 crc kubenswrapper[4812]: E1124 20:49:13.199788 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1aeaff7-a31a-4a80-9513-2046a638e838" containerName="mariadb-database-create" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.199801 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1aeaff7-a31a-4a80-9513-2046a638e838" containerName="mariadb-database-create" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.200105 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcf16335-ccec-4697-ae06-28ee97e365bf" containerName="mariadb-account-create" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.200166 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1aeaff7-a31a-4a80-9513-2046a638e838" containerName="mariadb-database-create" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.201170 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.203735 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.210183 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fnsnw" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.212075 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-p2j52"] Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.336452 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-db-sync-config-data\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.336732 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-combined-ca-bundle\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.336925 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvcrl\" (UniqueName: \"kubernetes.io/projected/19dfd708-252a-4b23-9d6c-2615078ce723-kube-api-access-xvcrl\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.438478 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-combined-ca-bundle\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.438689 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvcrl\" (UniqueName: \"kubernetes.io/projected/19dfd708-252a-4b23-9d6c-2615078ce723-kube-api-access-xvcrl\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.438784 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-db-sync-config-data\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.453999 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-db-sync-config-data\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.454667 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-combined-ca-bundle\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.460688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvcrl\" (UniqueName: \"kubernetes.io/projected/19dfd708-252a-4b23-9d6c-2615078ce723-kube-api-access-xvcrl\") pod \"barbican-db-sync-p2j52\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.531165 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:13 crc kubenswrapper[4812]: I1124 20:49:13.949387 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-p2j52"] Nov 24 20:49:14 crc kubenswrapper[4812]: I1124 20:49:14.259533 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p2j52" event={"ID":"19dfd708-252a-4b23-9d6c-2615078ce723","Type":"ContainerStarted","Data":"df79cedf09ece862001ebe0dfbd60dc36249348554236f7d8317bfa8c50c2f32"} Nov 24 20:49:14 crc kubenswrapper[4812]: I1124 20:49:14.259904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p2j52" event={"ID":"19dfd708-252a-4b23-9d6c-2615078ce723","Type":"ContainerStarted","Data":"9d6f6f5eaa90abf9d06901498c2b76bf74d1e7daef1aac1d498ee0ba79af6338"} Nov 24 20:49:14 crc kubenswrapper[4812]: I1124 20:49:14.283229 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-p2j52" podStartSLOduration=1.283201974 podStartE2EDuration="1.283201974s" podCreationTimestamp="2025-11-24 20:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:14.280800206 +0000 UTC m=+5548.069752617" watchObservedRunningTime="2025-11-24 20:49:14.283201974 +0000 UTC m=+5548.072154385" Nov 24 20:49:16 crc kubenswrapper[4812]: I1124 20:49:16.279306 4812 generic.go:334] "Generic (PLEG): container finished" podID="19dfd708-252a-4b23-9d6c-2615078ce723" containerID="df79cedf09ece862001ebe0dfbd60dc36249348554236f7d8317bfa8c50c2f32" exitCode=0 Nov 24 20:49:16 crc kubenswrapper[4812]: I1124 20:49:16.279373 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p2j52" event={"ID":"19dfd708-252a-4b23-9d6c-2615078ce723","Type":"ContainerDied","Data":"df79cedf09ece862001ebe0dfbd60dc36249348554236f7d8317bfa8c50c2f32"} Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.669485 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.724880 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-combined-ca-bundle\") pod \"19dfd708-252a-4b23-9d6c-2615078ce723\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.725073 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-db-sync-config-data\") pod \"19dfd708-252a-4b23-9d6c-2615078ce723\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.725184 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvcrl\" (UniqueName: \"kubernetes.io/projected/19dfd708-252a-4b23-9d6c-2615078ce723-kube-api-access-xvcrl\") pod \"19dfd708-252a-4b23-9d6c-2615078ce723\" (UID: \"19dfd708-252a-4b23-9d6c-2615078ce723\") " Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.746574 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "19dfd708-252a-4b23-9d6c-2615078ce723" (UID: "19dfd708-252a-4b23-9d6c-2615078ce723"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.754628 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19dfd708-252a-4b23-9d6c-2615078ce723-kube-api-access-xvcrl" (OuterVolumeSpecName: "kube-api-access-xvcrl") pod "19dfd708-252a-4b23-9d6c-2615078ce723" (UID: "19dfd708-252a-4b23-9d6c-2615078ce723"). InnerVolumeSpecName "kube-api-access-xvcrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.795539 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19dfd708-252a-4b23-9d6c-2615078ce723" (UID: "19dfd708-252a-4b23-9d6c-2615078ce723"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.830527 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvcrl\" (UniqueName: \"kubernetes.io/projected/19dfd708-252a-4b23-9d6c-2615078ce723-kube-api-access-xvcrl\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.830578 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:17 crc kubenswrapper[4812]: I1124 20:49:17.830591 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/19dfd708-252a-4b23-9d6c-2615078ce723-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.308579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-p2j52" event={"ID":"19dfd708-252a-4b23-9d6c-2615078ce723","Type":"ContainerDied","Data":"9d6f6f5eaa90abf9d06901498c2b76bf74d1e7daef1aac1d498ee0ba79af6338"} Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.308633 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6f6f5eaa90abf9d06901498c2b76bf74d1e7daef1aac1d498ee0ba79af6338" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.308655 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-p2j52" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.549726 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9fc8c454f-zfhjc"] Nov 24 20:49:18 crc kubenswrapper[4812]: E1124 20:49:18.550271 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dfd708-252a-4b23-9d6c-2615078ce723" containerName="barbican-db-sync" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.550283 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dfd708-252a-4b23-9d6c-2615078ce723" containerName="barbican-db-sync" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.550492 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="19dfd708-252a-4b23-9d6c-2615078ce723" containerName="barbican-db-sync" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.551275 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.558491 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.558760 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.558935 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fnsnw" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.571789 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9fc8c454f-zfhjc"] Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.597399 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-858465b8fb-6s7lg"] Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.599019 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.603793 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.634682 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-858465b8fb-6s7lg"] Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646181 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj9pz\" (UniqueName: \"kubernetes.io/projected/97eacd0a-6a86-431a-ad20-8c5b767e02de-kube-api-access-nj9pz\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646252 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-config-data\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646284 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b777cfd-5ec7-41fb-9119-96779293c4b3-logs\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646309 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-combined-ca-bundle\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646345 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-config-data-custom\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646380 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-config-data\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646404 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-combined-ca-bundle\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646427 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97eacd0a-6a86-431a-ad20-8c5b767e02de-logs\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646453 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njs6z\" (UniqueName: \"kubernetes.io/projected/5b777cfd-5ec7-41fb-9119-96779293c4b3-kube-api-access-njs6z\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.646476 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-config-data-custom\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.656937 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-548f777df9-v5vmj"] Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.658269 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.661237 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548f777df9-v5vmj"] Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.748422 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97eacd0a-6a86-431a-ad20-8c5b767e02de-logs\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.748479 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njs6z\" (UniqueName: \"kubernetes.io/projected/5b777cfd-5ec7-41fb-9119-96779293c4b3-kube-api-access-njs6z\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.748510 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-config-data-custom\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.748543 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-config\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.748702 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj9pz\" (UniqueName: \"kubernetes.io/projected/97eacd0a-6a86-431a-ad20-8c5b767e02de-kube-api-access-nj9pz\") pod 
\"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749079 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97eacd0a-6a86-431a-ad20-8c5b767e02de-logs\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749518 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-nb\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-config-data\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749678 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b777cfd-5ec7-41fb-9119-96779293c4b3-logs\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749726 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-combined-ca-bundle\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749765 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-config-data-custom\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749786 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-sb\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749819 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-config-data\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749860 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-dns-svc\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749885 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-combined-ca-bundle\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.749913 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frsk8\" (UniqueName: \"kubernetes.io/projected/2ba77045-9e8e-425f-8160-b80ab2b21b36-kube-api-access-frsk8\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.750775 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b777cfd-5ec7-41fb-9119-96779293c4b3-logs\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.753575 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-config-data-custom\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.756989 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-combined-ca-bundle\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.758211 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-config-data-custom\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.759236 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-config-data\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.760423 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97eacd0a-6a86-431a-ad20-8c5b767e02de-config-data\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.763100 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b777cfd-5ec7-41fb-9119-96779293c4b3-combined-ca-bundle\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.763876 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njs6z\" (UniqueName: \"kubernetes.io/projected/5b777cfd-5ec7-41fb-9119-96779293c4b3-kube-api-access-njs6z\") pod \"barbican-keystone-listener-858465b8fb-6s7lg\" (UID: \"5b777cfd-5ec7-41fb-9119-96779293c4b3\") " pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.772860 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj9pz\" (UniqueName: \"kubernetes.io/projected/97eacd0a-6a86-431a-ad20-8c5b767e02de-kube-api-access-nj9pz\") pod \"barbican-worker-9fc8c454f-zfhjc\" (UID: \"97eacd0a-6a86-431a-ad20-8c5b767e02de\") " pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.781058 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58d886c74d-lrk7q"] Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.782303 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.783646 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.796246 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58d886c74d-lrk7q"] Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851296 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675eb763-0ad8-4967-b2b5-920f1816c7f4-logs\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851364 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851387 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-config\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851519 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdk8k\" (UniqueName: \"kubernetes.io/projected/675eb763-0ad8-4967-b2b5-920f1816c7f4-kube-api-access-rdk8k\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851635 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-nb\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851688 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data-custom\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851786 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-combined-ca-bundle\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851909 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-sb\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.851978 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-dns-svc\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.852018 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frsk8\" (UniqueName: \"kubernetes.io/projected/2ba77045-9e8e-425f-8160-b80ab2b21b36-kube-api-access-frsk8\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.852102 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-config\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.852710 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-dns-svc\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.852720 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-sb\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.854105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-nb\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.871892 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frsk8\" (UniqueName: \"kubernetes.io/projected/2ba77045-9e8e-425f-8160-b80ab2b21b36-kube-api-access-frsk8\") pod \"dnsmasq-dns-548f777df9-v5vmj\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.896007 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9fc8c454f-zfhjc" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.926598 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.953895 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data-custom\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.953956 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-combined-ca-bundle\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.954038 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675eb763-0ad8-4967-b2b5-920f1816c7f4-logs\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.954068 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.954106 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdk8k\" (UniqueName: \"kubernetes.io/projected/675eb763-0ad8-4967-b2b5-920f1816c7f4-kube-api-access-rdk8k\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.954977 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675eb763-0ad8-4967-b2b5-920f1816c7f4-logs\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.958703 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-combined-ca-bundle\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.963925 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data-custom\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.964772 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.974724 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:18 crc kubenswrapper[4812]: I1124 20:49:18.974799 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdk8k\" (UniqueName: \"kubernetes.io/projected/675eb763-0ad8-4967-b2b5-920f1816c7f4-kube-api-access-rdk8k\") pod \"barbican-api-58d886c74d-lrk7q\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:19 crc kubenswrapper[4812]: I1124 20:49:19.147721 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:19 crc kubenswrapper[4812]: I1124 20:49:19.167130 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9fc8c454f-zfhjc"] Nov 24 20:49:19 crc kubenswrapper[4812]: I1124 20:49:19.245442 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-858465b8fb-6s7lg"] Nov 24 20:49:19 crc kubenswrapper[4812]: I1124 20:49:19.337639 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9fc8c454f-zfhjc" event={"ID":"97eacd0a-6a86-431a-ad20-8c5b767e02de","Type":"ContainerStarted","Data":"4ae5de6c678d3bbd4bf7f4960e9b5d1257b97ce66a60eac450add0127a10e99d"} Nov 24 20:49:19 crc kubenswrapper[4812]: I1124 20:49:19.343557 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" event={"ID":"5b777cfd-5ec7-41fb-9119-96779293c4b3","Type":"ContainerStarted","Data":"7f4d4f954391a7ce00faf86828b08dcfa88f6d9ddde972a299fca61da3a95438"} Nov 24 20:49:19 crc kubenswrapper[4812]: I1124 20:49:19.354655 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-548f777df9-v5vmj"] Nov 24 20:49:19 crc kubenswrapper[4812]: W1124 20:49:19.398296 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ba77045_9e8e_425f_8160_b80ab2b21b36.slice/crio-5df644a33afcf508c32117908c2841e6bf58604c0062d1aa2bf90808c6fbc05e WatchSource:0}: Error finding container 5df644a33afcf508c32117908c2841e6bf58604c0062d1aa2bf90808c6fbc05e: Status 404 returned error can't find the container with id 5df644a33afcf508c32117908c2841e6bf58604c0062d1aa2bf90808c6fbc05e Nov 24 20:49:19 crc kubenswrapper[4812]: I1124 20:49:19.430281 4812 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/barbican-api-58d886c74d-lrk7q"] Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.361306 4812 generic.go:334] "Generic (PLEG): container finished" podID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerID="92d773582afcdc07be3d27496b1a52523ee15ffe61586432ab51899ec09feb19" exitCode=0 Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.361469 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" event={"ID":"2ba77045-9e8e-425f-8160-b80ab2b21b36","Type":"ContainerDied","Data":"92d773582afcdc07be3d27496b1a52523ee15ffe61586432ab51899ec09feb19"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.361656 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" event={"ID":"2ba77045-9e8e-425f-8160-b80ab2b21b36","Type":"ContainerStarted","Data":"5df644a33afcf508c32117908c2841e6bf58604c0062d1aa2bf90808c6fbc05e"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.367571 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9fc8c454f-zfhjc" event={"ID":"97eacd0a-6a86-431a-ad20-8c5b767e02de","Type":"ContainerStarted","Data":"11f90bff574ddbd47742ee021befd223960d379bacb5c8a5897efd0b0a510cf0"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.367678 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9fc8c454f-zfhjc" event={"ID":"97eacd0a-6a86-431a-ad20-8c5b767e02de","Type":"ContainerStarted","Data":"cac3172dd06f2dd27b6d50c91a66feca67e6dfb9bf97f2e0f8b350d8daef69f2"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.374307 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" event={"ID":"5b777cfd-5ec7-41fb-9119-96779293c4b3","Type":"ContainerStarted","Data":"8184dc4be0af80384d5672af22e920dbc95ba5b1beef4e4474365c3af3302abc"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.374443 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" event={"ID":"5b777cfd-5ec7-41fb-9119-96779293c4b3","Type":"ContainerStarted","Data":"8ed0e707cdd9034a8c845746a9c0c1eae869f7e880ec0dd04b8602bc1ef3d163"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.381773 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d886c74d-lrk7q" event={"ID":"675eb763-0ad8-4967-b2b5-920f1816c7f4","Type":"ContainerStarted","Data":"e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.381862 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d886c74d-lrk7q" event={"ID":"675eb763-0ad8-4967-b2b5-920f1816c7f4","Type":"ContainerStarted","Data":"07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.381873 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d886c74d-lrk7q" event={"ID":"675eb763-0ad8-4967-b2b5-920f1816c7f4","Type":"ContainerStarted","Data":"982e4d4db86445847d79d8105adb7ebffebbd781eed7a6b16b6c8050429dbee0"} Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.384973 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.385012 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 
20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.407231 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9fc8c454f-zfhjc" podStartSLOduration=2.407213604 podStartE2EDuration="2.407213604s" podCreationTimestamp="2025-11-24 20:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:20.403073827 +0000 UTC m=+5554.192026198" watchObservedRunningTime="2025-11-24 20:49:20.407213604 +0000 UTC m=+5554.196165975" Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.422718 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-858465b8fb-6s7lg" podStartSLOduration=2.422694574 podStartE2EDuration="2.422694574s" podCreationTimestamp="2025-11-24 20:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:20.418287879 +0000 UTC m=+5554.207240250" watchObservedRunningTime="2025-11-24 20:49:20.422694574 +0000 UTC m=+5554.211646945" Nov 24 20:49:20 crc kubenswrapper[4812]: I1124 20:49:20.445846 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58d886c74d-lrk7q" podStartSLOduration=2.44582939 podStartE2EDuration="2.44582939s" podCreationTimestamp="2025-11-24 20:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:20.443905636 +0000 UTC m=+5554.232857997" watchObservedRunningTime="2025-11-24 20:49:20.44582939 +0000 UTC m=+5554.234781751" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.173221 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-c687cccc6-vsgqh"]
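
[Annotation] The m=+5554.19... suffixes in the tracker entries above are Go's monotonic clock reading: kubelet timestamps are time.Time values carrying both a wall-clock and a monotonic component (seconds since process start, here about 5554s), and subtraction prefers the monotonic part, so the reported durations are immune to wall-clock steps. A stdlib demonstration:

```go
package main

import (
	"fmt"
	"time"
)

// time.Time values from time.Now carry a monotonic reading printed
// as "m=+<seconds>" - the same suffix visible in the
// pod_startup_latency_tracker lines above. Sub uses the monotonic
// component, so measured durations ignore NTP steps or manual
// wall-clock changes.
func main() {
	start := time.Now()
	time.Sleep(50 * time.Millisecond)
	end := time.Now()

	fmt.Println(start)          // e.g. "2025-11-24 ... UTC m=+0.000012345"
	fmt.Println(end.Sub(start)) // ~50ms, from the monotonic readings

	// Round(0) strips the monotonic reading; printing then omits "m=".
	fmt.Println(start.Round(0))
}
```

Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.174764 4812 util.go:30] "No sandbox for pod can be found.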
Need to start a new one" pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.178000 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.178005 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.187965 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c687cccc6-vsgqh"] Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.310009 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-combined-ca-bundle\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.310060 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-config-data\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.310127 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2480741-4342-4f50-9cca-a5746948eb5d-logs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.310147 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-config-data-custom\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.310241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-internal-tls-certs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.310288 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-public-tls-certs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.310318 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9fg5\" (UniqueName: \"kubernetes.io/projected/a2480741-4342-4f50-9cca-a5746948eb5d-kube-api-access-k9fg5\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.392069 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-548f777df9-v5vmj" event={"ID":"2ba77045-9e8e-425f-8160-b80ab2b21b36","Type":"ContainerStarted","Data":"56e0a6cb3800c55b82729f183955a06e3000d6af5a3e78a6bedf5f71ad13bf4f"} Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.410652 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" podStartSLOduration=3.410633729 podStartE2EDuration="3.410633729s" podCreationTimestamp="2025-11-24 20:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:21.405260016 +0000 UTC m=+5555.194212407" watchObservedRunningTime="2025-11-24 20:49:21.410633729 +0000 UTC m=+5555.199586110" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.411781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-internal-tls-certs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.412640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-public-tls-certs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.412767 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9fg5\" (UniqueName: \"kubernetes.io/projected/a2480741-4342-4f50-9cca-a5746948eb5d-kube-api-access-k9fg5\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.412855 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-combined-ca-bundle\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.412931 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-config-data\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.413037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2480741-4342-4f50-9cca-a5746948eb5d-logs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.413105 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-config-data-custom\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.413704 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2480741-4342-4f50-9cca-a5746948eb5d-logs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.417276 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-public-tls-certs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.422479 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-internal-tls-certs\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.425150 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-config-data-custom\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.425542 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-config-data\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.426822 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2480741-4342-4f50-9cca-a5746948eb5d-combined-ca-bundle\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.439205 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9fg5\" (UniqueName: \"kubernetes.io/projected/a2480741-4342-4f50-9cca-a5746948eb5d-kube-api-access-k9fg5\") pod \"barbican-api-c687cccc6-vsgqh\" (UID: \"a2480741-4342-4f50-9cca-a5746948eb5d\") " pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.492398 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:21 crc kubenswrapper[4812]: I1124 20:49:21.932452 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-c687cccc6-vsgqh"] Nov 24 20:49:21 crc kubenswrapper[4812]: W1124 20:49:21.932477 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2480741_4342_4f50_9cca_a5746948eb5d.slice/crio-76c951d1924e76589679e2823c080073dd15402a64afd25596f19d2c71a92a5c WatchSource:0}: Error finding container 76c951d1924e76589679e2823c080073dd15402a64afd25596f19d2c71a92a5c: Status 404 returned error can't find the container with id 76c951d1924e76589679e2823c080073dd15402a64afd25596f19d2c71a92a5c Nov 24 20:49:22 crc kubenswrapper[4812]: I1124 20:49:22.401363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c687cccc6-vsgqh" event={"ID":"a2480741-4342-4f50-9cca-a5746948eb5d","Type":"ContainerStarted","Data":"b40a1d3c30a45a45806f8c8093000af44335cefbf89e2187b7cad6bd99e154f5"} Nov 24 20:49:22 crc kubenswrapper[4812]: I1124 20:49:22.402551 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c687cccc6-vsgqh" event={"ID":"a2480741-4342-4f50-9cca-a5746948eb5d","Type":"ContainerStarted","Data":"32503b4b63dcb407b230ec79ebaa45a4515c776e0a66fcd323d386b308c8a9c4"} Nov 24 20:49:22 crc kubenswrapper[4812]: I1124 20:49:22.402575 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:22 crc kubenswrapper[4812]: I1124 20:49:22.402618 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-c687cccc6-vsgqh" event={"ID":"a2480741-4342-4f50-9cca-a5746948eb5d","Type":"ContainerStarted","Data":"76c951d1924e76589679e2823c080073dd15402a64afd25596f19d2c71a92a5c"} Nov 24 20:49:22 crc kubenswrapper[4812]: I1124 20:49:22.402635 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:22 crc kubenswrapper[4812]: I1124 20:49:22.402647 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:22 crc kubenswrapper[4812]: I1124 20:49:22.429764 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-c687cccc6-vsgqh" podStartSLOduration=1.429745067 podStartE2EDuration="1.429745067s" podCreationTimestamp="2025-11-24 20:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:22.429412918 +0000 UTC m=+5556.218365299" watchObservedRunningTime="2025-11-24 20:49:22.429745067 +0000 UTC m=+5556.218697438" Nov 24 20:49:23 crc kubenswrapper[4812]: I1124 20:49:23.965599 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:49:23 crc kubenswrapper[4812]: E1124 20:49:23.967914 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:49:25 crc kubenswrapper[4812]: I1124 
20:49:25.524973 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:25 crc kubenswrapper[4812]: I1124 20:49:25.536869 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58d886c74d-lrk7q" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api-log" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 20:49:26 crc kubenswrapper[4812]: I1124 20:49:26.760131 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:28 crc kubenswrapper[4812]: I1124 20:49:28.976464 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.038999 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5dc88467-bm87w"] Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.039395 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" podUID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerName="dnsmasq-dns" containerID="cri-o://5a0c663e423acdb666c9ed5ed757c544bcb4868d0fc1983beb188ab8da26e35a" gracePeriod=10 Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.471474 4812 generic.go:334] "Generic (PLEG): container finished" podID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerID="5a0c663e423acdb666c9ed5ed757c544bcb4868d0fc1983beb188ab8da26e35a" exitCode=0 Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.471699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" event={"ID":"cdcce122-6441-49d4-8051-b4836f8f41d4","Type":"ContainerDied","Data":"5a0c663e423acdb666c9ed5ed757c544bcb4868d0fc1983beb188ab8da26e35a"} Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.471949 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" event={"ID":"cdcce122-6441-49d4-8051-b4836f8f41d4","Type":"ContainerDied","Data":"5f05e4e77f60aa86d5de3fd69df6ba8fbdd396249778ebf730d378b9e94b43db"} Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.471973 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f05e4e77f60aa86d5de3fd69df6ba8fbdd396249778ebf730d378b9e94b43db" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.511865 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.578908 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-dns-svc\") pod \"cdcce122-6441-49d4-8051-b4836f8f41d4\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.578991 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlbhl\" (UniqueName: \"kubernetes.io/projected/cdcce122-6441-49d4-8051-b4836f8f41d4-kube-api-access-hlbhl\") pod \"cdcce122-6441-49d4-8051-b4836f8f41d4\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.579029 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-sb\") pod \"cdcce122-6441-49d4-8051-b4836f8f41d4\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.579203 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-config\") pod \"cdcce122-6441-49d4-8051-b4836f8f41d4\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.579266 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-nb\") pod \"cdcce122-6441-49d4-8051-b4836f8f41d4\" (UID: \"cdcce122-6441-49d4-8051-b4836f8f41d4\") " Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.604575 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdcce122-6441-49d4-8051-b4836f8f41d4-kube-api-access-hlbhl" (OuterVolumeSpecName: "kube-api-access-hlbhl") pod "cdcce122-6441-49d4-8051-b4836f8f41d4" (UID: "cdcce122-6441-49d4-8051-b4836f8f41d4"). InnerVolumeSpecName "kube-api-access-hlbhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.652688 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdcce122-6441-49d4-8051-b4836f8f41d4" (UID: "cdcce122-6441-49d4-8051-b4836f8f41d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.668049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdcce122-6441-49d4-8051-b4836f8f41d4" (UID: "cdcce122-6441-49d4-8051-b4836f8f41d4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.670240 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdcce122-6441-49d4-8051-b4836f8f41d4" (UID: "cdcce122-6441-49d4-8051-b4836f8f41d4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.670715 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-config" (OuterVolumeSpecName: "config") pod "cdcce122-6441-49d4-8051-b4836f8f41d4" (UID: "cdcce122-6441-49d4-8051-b4836f8f41d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.681639 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.681666 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.681677 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.681689 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlbhl\" (UniqueName: \"kubernetes.io/projected/cdcce122-6441-49d4-8051-b4836f8f41d4-kube-api-access-hlbhl\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:29 crc kubenswrapper[4812]: I1124 20:49:29.681697 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdcce122-6441-49d4-8051-b4836f8f41d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:30 crc kubenswrapper[4812]: I1124 20:49:30.481945 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c5dc88467-bm87w" Nov 24 20:49:30 crc kubenswrapper[4812]: I1124 20:49:30.523856 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5dc88467-bm87w"] Nov 24 20:49:30 crc kubenswrapper[4812]: I1124 20:49:30.531491 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c5dc88467-bm87w"] Nov 24 20:49:30 crc kubenswrapper[4812]: I1124 20:49:30.979133 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcce122-6441-49d4-8051-b4836f8f41d4" path="/var/lib/kubelet/pods/cdcce122-6441-49d4-8051-b4836f8f41d4/volumes" Nov 24 20:49:32 crc kubenswrapper[4812]: I1124 20:49:32.833741 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:32 crc kubenswrapper[4812]: I1124 20:49:32.893968 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-c687cccc6-vsgqh" Nov 24 20:49:32 crc kubenswrapper[4812]: I1124 20:49:32.985483 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58d886c74d-lrk7q"] Nov 24 20:49:32 crc kubenswrapper[4812]: I1124 20:49:32.985726 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58d886c74d-lrk7q" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api-log" containerID="cri-o://07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20" gracePeriod=30 Nov 24 20:49:32 crc kubenswrapper[4812]: I1124 20:49:32.985844 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58d886c74d-lrk7q" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api" containerID="cri-o://e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9" gracePeriod=30 Nov 24 20:49:33 crc kubenswrapper[4812]: I1124 20:49:33.518156 4812 generic.go:334] "Generic (PLEG): container finished" podID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerID="07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20" exitCode=143 Nov 24 20:49:33 crc kubenswrapper[4812]: I1124 20:49:33.518187 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d886c74d-lrk7q" event={"ID":"675eb763-0ad8-4967-b2b5-920f1816c7f4","Type":"ContainerDied","Data":"07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20"} Nov 24 20:49:34 crc kubenswrapper[4812]: I1124 20:49:34.965921 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:49:34 crc kubenswrapper[4812]: E1124 20:49:34.966391 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:49:35 crc kubenswrapper[4812]: E1124 20:49:35.576966 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:55436->38.102.83.36:46073: write tcp 38.102.83.36:55436->38.102.83.36:46073: write: broken pipe Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.088102 4812 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/barbican-api-58d886c74d-lrk7q" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.38:9311/healthcheck\": read tcp 10.217.0.2:42082->10.217.1.38:9311: read: connection reset by peer" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.088195 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58d886c74d-lrk7q" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.38:9311/healthcheck\": read tcp 10.217.0.2:42088->10.217.1.38:9311: read: connection reset by peer" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.507806 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.571392 4812 generic.go:334] "Generic (PLEG): container finished" podID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerID="e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9" exitCode=0 Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.571440 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d886c74d-lrk7q" event={"ID":"675eb763-0ad8-4967-b2b5-920f1816c7f4","Type":"ContainerDied","Data":"e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9"} Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.571467 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58d886c74d-lrk7q" event={"ID":"675eb763-0ad8-4967-b2b5-920f1816c7f4","Type":"ContainerDied","Data":"982e4d4db86445847d79d8105adb7ebffebbd781eed7a6b16b6c8050429dbee0"} Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.571485 4812 scope.go:117] "RemoveContainer" containerID="e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.571479 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58d886c74d-lrk7q" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.599348 4812 scope.go:117] "RemoveContainer" containerID="07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.618540 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data-custom\") pod \"675eb763-0ad8-4967-b2b5-920f1816c7f4\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.618589 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-combined-ca-bundle\") pod \"675eb763-0ad8-4967-b2b5-920f1816c7f4\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.618708 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdk8k\" (UniqueName: \"kubernetes.io/projected/675eb763-0ad8-4967-b2b5-920f1816c7f4-kube-api-access-rdk8k\") pod \"675eb763-0ad8-4967-b2b5-920f1816c7f4\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.618826 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data\") pod \"675eb763-0ad8-4967-b2b5-920f1816c7f4\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.618870 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675eb763-0ad8-4967-b2b5-920f1816c7f4-logs\") pod \"675eb763-0ad8-4967-b2b5-920f1816c7f4\" (UID: \"675eb763-0ad8-4967-b2b5-920f1816c7f4\") " Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.619902 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/675eb763-0ad8-4967-b2b5-920f1816c7f4-logs" (OuterVolumeSpecName: "logs") pod "675eb763-0ad8-4967-b2b5-920f1816c7f4" (UID: "675eb763-0ad8-4967-b2b5-920f1816c7f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.624910 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/675eb763-0ad8-4967-b2b5-920f1816c7f4-kube-api-access-rdk8k" (OuterVolumeSpecName: "kube-api-access-rdk8k") pod "675eb763-0ad8-4967-b2b5-920f1816c7f4" (UID: "675eb763-0ad8-4967-b2b5-920f1816c7f4"). InnerVolumeSpecName "kube-api-access-rdk8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.625423 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "675eb763-0ad8-4967-b2b5-920f1816c7f4" (UID: "675eb763-0ad8-4967-b2b5-920f1816c7f4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.629644 4812 scope.go:117] "RemoveContainer" containerID="e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9" Nov 24 20:49:36 crc kubenswrapper[4812]: E1124 20:49:36.630656 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9\": container with ID starting with e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9 not found: ID does not exist" containerID="e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.630713 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9"} err="failed to get container status \"e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9\": rpc error: code = NotFound desc = could not find container \"e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9\": container with ID starting with e799652f21c5b1c4def016f2aa425c8685522d5cb0595322cdb69b46ba724ed9 not found: ID does not exist" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.630747 4812 scope.go:117] "RemoveContainer" containerID="07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20" Nov 24 20:49:36 crc kubenswrapper[4812]: E1124 20:49:36.631083 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20\": container with ID starting with 07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20 not found: ID does not exist" containerID="07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.631130 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20"} err="failed to get container status \"07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20\": rpc error: code = NotFound desc = could not find container \"07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20\": container with ID starting with 07308ec94754f2700235d3e7127c0c13c8027040861e762a1863dce1d44a8a20 not found: ID does not exist" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.668096 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "675eb763-0ad8-4967-b2b5-920f1816c7f4" (UID: "675eb763-0ad8-4967-b2b5-920f1816c7f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.672053 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data" (OuterVolumeSpecName: "config-data") pod "675eb763-0ad8-4967-b2b5-920f1816c7f4" (UID: "675eb763-0ad8-4967-b2b5-920f1816c7f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.721070 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdk8k\" (UniqueName: \"kubernetes.io/projected/675eb763-0ad8-4967-b2b5-920f1816c7f4-kube-api-access-rdk8k\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.721108 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.721118 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675eb763-0ad8-4967-b2b5-920f1816c7f4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.721126 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.721136 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675eb763-0ad8-4967-b2b5-920f1816c7f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.927084 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58d886c74d-lrk7q"] Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.938911 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58d886c74d-lrk7q"] Nov 24 20:49:36 crc kubenswrapper[4812]: I1124 20:49:36.983139 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" path="/var/lib/kubelet/pods/675eb763-0ad8-4967-b2b5-920f1816c7f4/volumes" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.414594 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9dp74"] Nov 24 20:49:40 crc kubenswrapper[4812]: E1124 20:49:40.415242 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.415255 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api" Nov 24 20:49:40 crc kubenswrapper[4812]: E1124 20:49:40.415278 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerName="dnsmasq-dns" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.415283 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerName="dnsmasq-dns" Nov 24 20:49:40 crc kubenswrapper[4812]: E1124 20:49:40.415291 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api-log" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.415297 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api-log" Nov 24 20:49:40 crc kubenswrapper[4812]: E1124 20:49:40.415311 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerName="init" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 
20:49:40.415317 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerName="init" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.415504 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcce122-6441-49d4-8051-b4836f8f41d4" containerName="dnsmasq-dns" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.415526 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.415540 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="675eb763-0ad8-4967-b2b5-920f1816c7f4" containerName="barbican-api-log" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.417280 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.429135 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-188a-account-create-8jxrl"] Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.430544 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.436938 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.445140 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9dp74"] Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.451751 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-188a-account-create-8jxrl"] Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.506741 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99p46\" (UniqueName: \"kubernetes.io/projected/310ff154-913c-4c61-b953-08c203c81f73-kube-api-access-99p46\") pod \"neutron-db-create-9dp74\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.506955 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/310ff154-913c-4c61-b953-08c203c81f73-operator-scripts\") pod \"neutron-db-create-9dp74\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.506996 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwz6k\" (UniqueName: \"kubernetes.io/projected/a5173d2d-26b0-4d31-86f8-80442713d33c-kube-api-access-qwz6k\") pod \"neutron-188a-account-create-8jxrl\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.507041 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5173d2d-26b0-4d31-86f8-80442713d33c-operator-scripts\") pod \"neutron-188a-account-create-8jxrl\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.608884 4812 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qwz6k\" (UniqueName: \"kubernetes.io/projected/a5173d2d-26b0-4d31-86f8-80442713d33c-kube-api-access-qwz6k\") pod \"neutron-188a-account-create-8jxrl\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.608972 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5173d2d-26b0-4d31-86f8-80442713d33c-operator-scripts\") pod \"neutron-188a-account-create-8jxrl\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.609391 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99p46\" (UniqueName: \"kubernetes.io/projected/310ff154-913c-4c61-b953-08c203c81f73-kube-api-access-99p46\") pod \"neutron-db-create-9dp74\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.609644 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/310ff154-913c-4c61-b953-08c203c81f73-operator-scripts\") pod \"neutron-db-create-9dp74\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.610389 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/310ff154-913c-4c61-b953-08c203c81f73-operator-scripts\") pod \"neutron-db-create-9dp74\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.612143 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5173d2d-26b0-4d31-86f8-80442713d33c-operator-scripts\") pod \"neutron-188a-account-create-8jxrl\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.630096 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99p46\" (UniqueName: \"kubernetes.io/projected/310ff154-913c-4c61-b953-08c203c81f73-kube-api-access-99p46\") pod \"neutron-db-create-9dp74\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.640889 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwz6k\" (UniqueName: \"kubernetes.io/projected/a5173d2d-26b0-4d31-86f8-80442713d33c-kube-api-access-qwz6k\") pod \"neutron-188a-account-create-8jxrl\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.751724 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:40 crc kubenswrapper[4812]: I1124 20:49:40.760585 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.232517 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9dp74"] Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.244410 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-188a-account-create-8jxrl"] Nov 24 20:49:41 crc kubenswrapper[4812]: W1124 20:49:41.250674 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5173d2d_26b0_4d31_86f8_80442713d33c.slice/crio-53aab43b65b1c658075adaefbb604abee735a878ce528793ee5480acc2947ae2 WatchSource:0}: Error finding container 53aab43b65b1c658075adaefbb604abee735a878ce528793ee5480acc2947ae2: Status 404 returned error can't find the container with id 53aab43b65b1c658075adaefbb604abee735a878ce528793ee5480acc2947ae2 Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.622970 4812 generic.go:334] "Generic (PLEG): container finished" podID="310ff154-913c-4c61-b953-08c203c81f73" containerID="d060d02abe36f376bd30a5079deeb543d51ff3987deecf587ed94cc5d088bc43" exitCode=0 Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.623077 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9dp74" event={"ID":"310ff154-913c-4c61-b953-08c203c81f73","Type":"ContainerDied","Data":"d060d02abe36f376bd30a5079deeb543d51ff3987deecf587ed94cc5d088bc43"} Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.623372 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9dp74" event={"ID":"310ff154-913c-4c61-b953-08c203c81f73","Type":"ContainerStarted","Data":"57cbfc67bd73934d7cd5c888964cd967ddde027c98b907d215f7e28e6d909732"} Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.626436 4812 generic.go:334] "Generic (PLEG): container finished" podID="a5173d2d-26b0-4d31-86f8-80442713d33c" containerID="d668797aa2af3e67c45edb9f290a38ebbdb63c3961304a7c7d91e4acf729b422" exitCode=0 Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.626482 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-188a-account-create-8jxrl" event={"ID":"a5173d2d-26b0-4d31-86f8-80442713d33c","Type":"ContainerDied","Data":"d668797aa2af3e67c45edb9f290a38ebbdb63c3961304a7c7d91e4acf729b422"} Nov 24 20:49:41 crc kubenswrapper[4812]: I1124 20:49:41.626510 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-188a-account-create-8jxrl" event={"ID":"a5173d2d-26b0-4d31-86f8-80442713d33c","Type":"ContainerStarted","Data":"53aab43b65b1c658075adaefbb604abee735a878ce528793ee5480acc2947ae2"} Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.123687 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.130744 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.166107 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99p46\" (UniqueName: \"kubernetes.io/projected/310ff154-913c-4c61-b953-08c203c81f73-kube-api-access-99p46\") pod \"310ff154-913c-4c61-b953-08c203c81f73\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.166175 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5173d2d-26b0-4d31-86f8-80442713d33c-operator-scripts\") pod \"a5173d2d-26b0-4d31-86f8-80442713d33c\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.166262 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/310ff154-913c-4c61-b953-08c203c81f73-operator-scripts\") pod \"310ff154-913c-4c61-b953-08c203c81f73\" (UID: \"310ff154-913c-4c61-b953-08c203c81f73\") " Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.166321 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwz6k\" (UniqueName: \"kubernetes.io/projected/a5173d2d-26b0-4d31-86f8-80442713d33c-kube-api-access-qwz6k\") pod \"a5173d2d-26b0-4d31-86f8-80442713d33c\" (UID: \"a5173d2d-26b0-4d31-86f8-80442713d33c\") " Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.167440 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/310ff154-913c-4c61-b953-08c203c81f73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "310ff154-913c-4c61-b953-08c203c81f73" (UID: "310ff154-913c-4c61-b953-08c203c81f73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.167823 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5173d2d-26b0-4d31-86f8-80442713d33c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5173d2d-26b0-4d31-86f8-80442713d33c" (UID: "a5173d2d-26b0-4d31-86f8-80442713d33c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.182586 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/310ff154-913c-4c61-b953-08c203c81f73-kube-api-access-99p46" (OuterVolumeSpecName: "kube-api-access-99p46") pod "310ff154-913c-4c61-b953-08c203c81f73" (UID: "310ff154-913c-4c61-b953-08c203c81f73"). InnerVolumeSpecName "kube-api-access-99p46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.182823 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5173d2d-26b0-4d31-86f8-80442713d33c-kube-api-access-qwz6k" (OuterVolumeSpecName: "kube-api-access-qwz6k") pod "a5173d2d-26b0-4d31-86f8-80442713d33c" (UID: "a5173d2d-26b0-4d31-86f8-80442713d33c"). InnerVolumeSpecName "kube-api-access-qwz6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.268723 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwz6k\" (UniqueName: \"kubernetes.io/projected/a5173d2d-26b0-4d31-86f8-80442713d33c-kube-api-access-qwz6k\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.268758 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99p46\" (UniqueName: \"kubernetes.io/projected/310ff154-913c-4c61-b953-08c203c81f73-kube-api-access-99p46\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.268772 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5173d2d-26b0-4d31-86f8-80442713d33c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.268783 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/310ff154-913c-4c61-b953-08c203c81f73-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.666918 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-188a-account-create-8jxrl" event={"ID":"a5173d2d-26b0-4d31-86f8-80442713d33c","Type":"ContainerDied","Data":"53aab43b65b1c658075adaefbb604abee735a878ce528793ee5480acc2947ae2"} Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.667000 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53aab43b65b1c658075adaefbb604abee735a878ce528793ee5480acc2947ae2" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.667160 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-188a-account-create-8jxrl" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.670437 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9dp74" event={"ID":"310ff154-913c-4c61-b953-08c203c81f73","Type":"ContainerDied","Data":"57cbfc67bd73934d7cd5c888964cd967ddde027c98b907d215f7e28e6d909732"} Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.670504 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57cbfc67bd73934d7cd5c888964cd967ddde027c98b907d215f7e28e6d909732" Nov 24 20:49:43 crc kubenswrapper[4812]: I1124 20:49:43.670780 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9dp74" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.655063 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-r64jf"] Nov 24 20:49:45 crc kubenswrapper[4812]: E1124 20:49:45.655799 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5173d2d-26b0-4d31-86f8-80442713d33c" containerName="mariadb-account-create" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.655815 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5173d2d-26b0-4d31-86f8-80442713d33c" containerName="mariadb-account-create" Nov 24 20:49:45 crc kubenswrapper[4812]: E1124 20:49:45.655832 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="310ff154-913c-4c61-b953-08c203c81f73" containerName="mariadb-database-create" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.655841 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="310ff154-913c-4c61-b953-08c203c81f73" containerName="mariadb-database-create" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.656047 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5173d2d-26b0-4d31-86f8-80442713d33c" containerName="mariadb-account-create" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.656078 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="310ff154-913c-4c61-b953-08c203c81f73" containerName="mariadb-database-create" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.656713 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r64jf" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.658828 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.659502 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jrgvd" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.659676 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.677863 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r64jf"] Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.717319 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-config\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.717414 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wtfk\" (UniqueName: \"kubernetes.io/projected/e7ae9954-3d82-47b9-9a6d-399d05351de2-kube-api-access-2wtfk\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.717451 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf" Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.819029 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-config\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.819109 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wtfk\" (UniqueName: \"kubernetes.io/projected/e7ae9954-3d82-47b9-9a6d-399d05351de2-kube-api-access-2wtfk\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.819153 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.825755 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-config\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.826572 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.861099 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wtfk\" (UniqueName: \"kubernetes.io/projected/e7ae9954-3d82-47b9-9a6d-399d05351de2-kube-api-access-2wtfk\") pod \"neutron-db-sync-r64jf\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.965621 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445"
Nov 24 20:49:45 crc kubenswrapper[4812]: E1124 20:49:45.965891 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 20:49:45 crc kubenswrapper[4812]: I1124 20:49:45.974893 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:46 crc kubenswrapper[4812]: I1124 20:49:46.475058 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r64jf"]
Nov 24 20:49:46 crc kubenswrapper[4812]: W1124 20:49:46.480097 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7ae9954_3d82_47b9_9a6d_399d05351de2.slice/crio-c18b0fa4a1cab80d077f3e8a6e801d70fe8193f6cb50f83a3b8ed43cde08ca48 WatchSource:0}: Error finding container c18b0fa4a1cab80d077f3e8a6e801d70fe8193f6cb50f83a3b8ed43cde08ca48: Status 404 returned error can't find the container with id c18b0fa4a1cab80d077f3e8a6e801d70fe8193f6cb50f83a3b8ed43cde08ca48
Nov 24 20:49:46 crc kubenswrapper[4812]: I1124 20:49:46.709020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r64jf" event={"ID":"e7ae9954-3d82-47b9-9a6d-399d05351de2","Type":"ContainerStarted","Data":"99c13bfbd6fd38a15e53664b0a7db990f438998c4dce2b2a497511290a653e8a"}
Nov 24 20:49:46 crc kubenswrapper[4812]: I1124 20:49:46.709590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r64jf" event={"ID":"e7ae9954-3d82-47b9-9a6d-399d05351de2","Type":"ContainerStarted","Data":"c18b0fa4a1cab80d077f3e8a6e801d70fe8193f6cb50f83a3b8ed43cde08ca48"}
Nov 24 20:49:46 crc kubenswrapper[4812]: I1124 20:49:46.729703 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-r64jf" podStartSLOduration=1.729676055 podStartE2EDuration="1.729676055s" podCreationTimestamp="2025-11-24 20:49:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:46.728879792 +0000 UTC m=+5580.517832193" watchObservedRunningTime="2025-11-24 20:49:46.729676055 +0000 UTC m=+5580.518628466"
Nov 24 20:49:51 crc kubenswrapper[4812]: I1124 20:49:51.776018 4812 generic.go:334] "Generic (PLEG): container finished" podID="e7ae9954-3d82-47b9-9a6d-399d05351de2" containerID="99c13bfbd6fd38a15e53664b0a7db990f438998c4dce2b2a497511290a653e8a" exitCode=0
Nov 24 20:49:51 crc kubenswrapper[4812]: I1124 20:49:51.776126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r64jf" event={"ID":"e7ae9954-3d82-47b9-9a6d-399d05351de2","Type":"ContainerDied","Data":"99c13bfbd6fd38a15e53664b0a7db990f438998c4dce2b2a497511290a653e8a"}
Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.176436 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r64jf"
Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.290264 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-config\") pod \"e7ae9954-3d82-47b9-9a6d-399d05351de2\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") "
Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.290456 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wtfk\" (UniqueName: \"kubernetes.io/projected/e7ae9954-3d82-47b9-9a6d-399d05351de2-kube-api-access-2wtfk\") pod \"e7ae9954-3d82-47b9-9a6d-399d05351de2\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") "
Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.290586 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle\") pod \"e7ae9954-3d82-47b9-9a6d-399d05351de2\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") "
Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.297980 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ae9954-3d82-47b9-9a6d-399d05351de2-kube-api-access-2wtfk" (OuterVolumeSpecName: "kube-api-access-2wtfk") pod "e7ae9954-3d82-47b9-9a6d-399d05351de2" (UID: "e7ae9954-3d82-47b9-9a6d-399d05351de2"). InnerVolumeSpecName "kube-api-access-2wtfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:49:53 crc kubenswrapper[4812]: E1124 20:49:53.327184 4812 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle podName:e7ae9954-3d82-47b9-9a6d-399d05351de2 nodeName:}" failed. No retries permitted until 2025-11-24 20:49:53.827149291 +0000 UTC m=+5587.616101692 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle") pod "e7ae9954-3d82-47b9-9a6d-399d05351de2" (UID: "e7ae9954-3d82-47b9-9a6d-399d05351de2") : error deleting /var/lib/kubelet/pods/e7ae9954-3d82-47b9-9a6d-399d05351de2/volume-subpaths: remove /var/lib/kubelet/pods/e7ae9954-3d82-47b9-9a6d-399d05351de2/volume-subpaths: no such file or directory
Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.331258 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-config" (OuterVolumeSpecName: "config") pod "e7ae9954-3d82-47b9-9a6d-399d05351de2" (UID: "e7ae9954-3d82-47b9-9a6d-399d05351de2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.394011 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.394061 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wtfk\" (UniqueName: \"kubernetes.io/projected/e7ae9954-3d82-47b9-9a6d-399d05351de2-kube-api-access-2wtfk\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.799994 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r64jf" event={"ID":"e7ae9954-3d82-47b9-9a6d-399d05351de2","Type":"ContainerDied","Data":"c18b0fa4a1cab80d077f3e8a6e801d70fe8193f6cb50f83a3b8ed43cde08ca48"} Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.800091 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c18b0fa4a1cab80d077f3e8a6e801d70fe8193f6cb50f83a3b8ed43cde08ca48" Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.800109 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r64jf" Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.903434 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle\") pod \"e7ae9954-3d82-47b9-9a6d-399d05351de2\" (UID: \"e7ae9954-3d82-47b9-9a6d-399d05351de2\") " Nov 24 20:49:53 crc kubenswrapper[4812]: I1124 20:49:53.909552 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7ae9954-3d82-47b9-9a6d-399d05351de2" (UID: "e7ae9954-3d82-47b9-9a6d-399d05351de2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.005921 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7ae9954-3d82-47b9-9a6d-399d05351de2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.086660 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7684487-f4xqs"] Nov 24 20:49:54 crc kubenswrapper[4812]: E1124 20:49:54.086998 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7ae9954-3d82-47b9-9a6d-399d05351de2" containerName="neutron-db-sync" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.087010 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ae9954-3d82-47b9-9a6d-399d05351de2" containerName="neutron-db-sync" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.087157 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7ae9954-3d82-47b9-9a6d-399d05351de2" containerName="neutron-db-sync" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.088890 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.104783 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7684487-f4xqs"] Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.136649 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f44d765b8-k4cmk"] Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.138176 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.140379 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.140536 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jrgvd" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.140640 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.140839 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.163962 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f44d765b8-k4cmk"] Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209080 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-config\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209159 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-ovndb-tls-certs\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209211 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-httpd-config\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209250 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fwvr\" (UniqueName: \"kubernetes.io/projected/61621044-78cb-418d-a1a6-b88dae6f0f36-kube-api-access-9fwvr\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209363 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-dns-svc\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209424 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209454 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5ffg\" (UniqueName: \"kubernetes.io/projected/8a2b6699-c494-4751-9be6-9be5a49bdf4f-kube-api-access-g5ffg\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209509 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-combined-ca-bundle\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.209598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-config\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310406 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-config\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310459 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-ovndb-tls-certs\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310489 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-httpd-config\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fwvr\" (UniqueName: \"kubernetes.io/projected/61621044-78cb-418d-a1a6-b88dae6f0f36-kube-api-access-9fwvr\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310575 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-dns-svc\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310612 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310641 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5ffg\" (UniqueName: \"kubernetes.io/projected/8a2b6699-c494-4751-9be6-9be5a49bdf4f-kube-api-access-g5ffg\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310683 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-combined-ca-bundle\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310726 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.310754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-config\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.312016 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.312786 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-config\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.313040 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-dns-svc\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.313264 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: 
\"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.316762 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-combined-ca-bundle\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.316826 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-ovndb-tls-certs\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.318479 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-httpd-config\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.330117 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-config\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.332945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fwvr\" (UniqueName: \"kubernetes.io/projected/61621044-78cb-418d-a1a6-b88dae6f0f36-kube-api-access-9fwvr\") pod \"dnsmasq-dns-5c7684487-f4xqs\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.334190 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5ffg\" (UniqueName: \"kubernetes.io/projected/8a2b6699-c494-4751-9be6-9be5a49bdf4f-kube-api-access-g5ffg\") pod \"neutron-5f44d765b8-k4cmk\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.466880 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:54 crc kubenswrapper[4812]: I1124 20:49:54.478699 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.098166 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7684487-f4xqs"] Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.299167 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f44d765b8-k4cmk"] Nov 24 20:49:55 crc kubenswrapper[4812]: W1124 20:49:55.309779 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a2b6699_c494_4751_9be6_9be5a49bdf4f.slice/crio-a87dd7b151ceea16b1829618d7985e39b4303b9288dfb365e1f67768715333a1 WatchSource:0}: Error finding container a87dd7b151ceea16b1829618d7985e39b4303b9288dfb365e1f67768715333a1: Status 404 returned error can't find the container with id a87dd7b151ceea16b1829618d7985e39b4303b9288dfb365e1f67768715333a1 Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.819113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f44d765b8-k4cmk" event={"ID":"8a2b6699-c494-4751-9be6-9be5a49bdf4f","Type":"ContainerStarted","Data":"cb144678e4a1689cdb895cce09ec49f30f5a7692108fad1cb32698ccc71de39e"} Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.819173 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f44d765b8-k4cmk" event={"ID":"8a2b6699-c494-4751-9be6-9be5a49bdf4f","Type":"ContainerStarted","Data":"744b4ae0b4c086185c75dc6b539f4c62fbe3c76654739ec5921c3e16f35e29e9"} Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.819190 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f44d765b8-k4cmk" event={"ID":"8a2b6699-c494-4751-9be6-9be5a49bdf4f","Type":"ContainerStarted","Data":"a87dd7b151ceea16b1829618d7985e39b4303b9288dfb365e1f67768715333a1"} Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.820315 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.822193 4812 generic.go:334] "Generic (PLEG): container finished" podID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerID="a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b" exitCode=0 Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.822228 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" event={"ID":"61621044-78cb-418d-a1a6-b88dae6f0f36","Type":"ContainerDied","Data":"a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b"} Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.822246 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" event={"ID":"61621044-78cb-418d-a1a6-b88dae6f0f36","Type":"ContainerStarted","Data":"c24822aea5ed3965b9a90ecaaff0dd0dc9d263157a9ce3b0d014afd1b9d5d031"} Nov 24 20:49:55 crc kubenswrapper[4812]: I1124 20:49:55.880469 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5f44d765b8-k4cmk" podStartSLOduration=1.880445705 podStartE2EDuration="1.880445705s" podCreationTimestamp="2025-11-24 20:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:55.850461835 +0000 UTC m=+5589.639414206" watchObservedRunningTime="2025-11-24 20:49:55.880445705 +0000 UTC m=+5589.669398076" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.182869 4812 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f5b87f55c-47mq2"] Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.184271 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.186154 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.188537 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.198514 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f5b87f55c-47mq2"] Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.259889 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-config\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.259961 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-public-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.259996 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-combined-ca-bundle\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.260025 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-ovndb-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.260080 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkcm5\" (UniqueName: \"kubernetes.io/projected/e0f72ed7-1afc-4291-ac01-8832add1eac3-kube-api-access-fkcm5\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.260179 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-httpd-config\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.260237 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-internal-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " 
pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.361400 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-config\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.361458 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-public-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.361486 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-combined-ca-bundle\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.361507 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-ovndb-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.361552 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkcm5\" (UniqueName: \"kubernetes.io/projected/e0f72ed7-1afc-4291-ac01-8832add1eac3-kube-api-access-fkcm5\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.361590 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-httpd-config\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.361626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-internal-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.365212 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-config\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.365437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-public-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.365903 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-ovndb-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.366007 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-combined-ca-bundle\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.367998 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-internal-tls-certs\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.370691 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e0f72ed7-1afc-4291-ac01-8832add1eac3-httpd-config\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.387671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkcm5\" (UniqueName: \"kubernetes.io/projected/e0f72ed7-1afc-4291-ac01-8832add1eac3-kube-api-access-fkcm5\") pod \"neutron-6f5b87f55c-47mq2\" (UID: \"e0f72ed7-1afc-4291-ac01-8832add1eac3\") " pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.532955 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.835609 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" event={"ID":"61621044-78cb-418d-a1a6-b88dae6f0f36","Type":"ContainerStarted","Data":"78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a"} Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.835982 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:49:56 crc kubenswrapper[4812]: I1124 20:49:56.852109 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" podStartSLOduration=2.852092548 podStartE2EDuration="2.852092548s" podCreationTimestamp="2025-11-24 20:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:56.851751538 +0000 UTC m=+5590.640703909" watchObservedRunningTime="2025-11-24 20:49:56.852092548 +0000 UTC m=+5590.641044929" Nov 24 20:49:57 crc kubenswrapper[4812]: I1124 20:49:57.062041 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f5b87f55c-47mq2"] Nov 24 20:49:57 crc kubenswrapper[4812]: I1124 20:49:57.854496 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5b87f55c-47mq2" event={"ID":"e0f72ed7-1afc-4291-ac01-8832add1eac3","Type":"ContainerStarted","Data":"a780dde236b7e4df9a1058328653ac5a8cb8192dc7fea089cecf9d9bc153494b"} Nov 24 20:49:57 crc kubenswrapper[4812]: I1124 20:49:57.856482 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5b87f55c-47mq2" event={"ID":"e0f72ed7-1afc-4291-ac01-8832add1eac3","Type":"ContainerStarted","Data":"2ff65f5740a22f99c51eb9b68b7b134e2f2339cc8b3bdb6cc09d229a8a450015"} Nov 24 20:49:57 crc kubenswrapper[4812]: I1124 20:49:57.856556 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f5b87f55c-47mq2" event={"ID":"e0f72ed7-1afc-4291-ac01-8832add1eac3","Type":"ContainerStarted","Data":"65ad5b5884495ea60e3a24617b4fc631be84091b7950e3ba19a5b731b2db9fb2"} Nov 24 20:49:57 crc kubenswrapper[4812]: I1124 20:49:57.856643 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:49:57 crc kubenswrapper[4812]: I1124 20:49:57.881044 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f5b87f55c-47mq2" podStartSLOduration=1.881018175 podStartE2EDuration="1.881018175s" podCreationTimestamp="2025-11-24 20:49:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:49:57.873587744 +0000 UTC m=+5591.662540115" watchObservedRunningTime="2025-11-24 20:49:57.881018175 +0000 UTC m=+5591.669970546" Nov 24 20:50:00 crc kubenswrapper[4812]: I1124 20:50:00.965087 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:50:00 crc kubenswrapper[4812]: E1124 20:50:00.965560 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:50:04 crc kubenswrapper[4812]: I1124 20:50:04.468696 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:50:04 crc kubenswrapper[4812]: I1124 20:50:04.552782 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548f777df9-v5vmj"] Nov 24 20:50:04 crc kubenswrapper[4812]: I1124 20:50:04.553055 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" podUID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerName="dnsmasq-dns" containerID="cri-o://56e0a6cb3800c55b82729f183955a06e3000d6af5a3e78a6bedf5f71ad13bf4f" gracePeriod=10 Nov 24 20:50:04 crc kubenswrapper[4812]: I1124 20:50:04.913317 4812 generic.go:334] "Generic (PLEG): container finished" podID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerID="56e0a6cb3800c55b82729f183955a06e3000d6af5a3e78a6bedf5f71ad13bf4f" exitCode=0 Nov 24 20:50:04 crc kubenswrapper[4812]: I1124 20:50:04.913435 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" event={"ID":"2ba77045-9e8e-425f-8160-b80ab2b21b36","Type":"ContainerDied","Data":"56e0a6cb3800c55b82729f183955a06e3000d6af5a3e78a6bedf5f71ad13bf4f"} Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.027939 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.136292 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-sb\") pod \"2ba77045-9e8e-425f-8160-b80ab2b21b36\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.136415 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-config\") pod \"2ba77045-9e8e-425f-8160-b80ab2b21b36\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.136437 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frsk8\" (UniqueName: \"kubernetes.io/projected/2ba77045-9e8e-425f-8160-b80ab2b21b36-kube-api-access-frsk8\") pod \"2ba77045-9e8e-425f-8160-b80ab2b21b36\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.136555 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-dns-svc\") pod \"2ba77045-9e8e-425f-8160-b80ab2b21b36\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.136588 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-nb\") pod \"2ba77045-9e8e-425f-8160-b80ab2b21b36\" (UID: \"2ba77045-9e8e-425f-8160-b80ab2b21b36\") " Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.144133 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba77045-9e8e-425f-8160-b80ab2b21b36-kube-api-access-frsk8" 
(OuterVolumeSpecName: "kube-api-access-frsk8") pod "2ba77045-9e8e-425f-8160-b80ab2b21b36" (UID: "2ba77045-9e8e-425f-8160-b80ab2b21b36"). InnerVolumeSpecName "kube-api-access-frsk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.192483 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2ba77045-9e8e-425f-8160-b80ab2b21b36" (UID: "2ba77045-9e8e-425f-8160-b80ab2b21b36"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.192988 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2ba77045-9e8e-425f-8160-b80ab2b21b36" (UID: "2ba77045-9e8e-425f-8160-b80ab2b21b36"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.207674 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-config" (OuterVolumeSpecName: "config") pod "2ba77045-9e8e-425f-8160-b80ab2b21b36" (UID: "2ba77045-9e8e-425f-8160-b80ab2b21b36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.211809 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2ba77045-9e8e-425f-8160-b80ab2b21b36" (UID: "2ba77045-9e8e-425f-8160-b80ab2b21b36"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.238090 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.238218 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.238303 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frsk8\" (UniqueName: \"kubernetes.io/projected/2ba77045-9e8e-425f-8160-b80ab2b21b36-kube-api-access-frsk8\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.238406 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.238472 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ba77045-9e8e-425f-8160-b80ab2b21b36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.927678 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" event={"ID":"2ba77045-9e8e-425f-8160-b80ab2b21b36","Type":"ContainerDied","Data":"5df644a33afcf508c32117908c2841e6bf58604c0062d1aa2bf90808c6fbc05e"} Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.927757 4812 scope.go:117] "RemoveContainer" containerID="56e0a6cb3800c55b82729f183955a06e3000d6af5a3e78a6bedf5f71ad13bf4f" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.927811 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-548f777df9-v5vmj" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.972545 4812 scope.go:117] "RemoveContainer" containerID="92d773582afcdc07be3d27496b1a52523ee15ffe61586432ab51899ec09feb19" Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.979474 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-548f777df9-v5vmj"] Nov 24 20:50:05 crc kubenswrapper[4812]: I1124 20:50:05.988141 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-548f777df9-v5vmj"] Nov 24 20:50:06 crc kubenswrapper[4812]: I1124 20:50:06.986159 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba77045-9e8e-425f-8160-b80ab2b21b36" path="/var/lib/kubelet/pods/2ba77045-9e8e-425f-8160-b80ab2b21b36/volumes" Nov 24 20:50:12 crc kubenswrapper[4812]: I1124 20:50:12.965955 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:50:14 crc kubenswrapper[4812]: I1124 20:50:14.009747 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"1ed217e6b672176474a4c358d1230ff543c81838c55c07bc096791453845da5d"} Nov 24 20:50:24 crc kubenswrapper[4812]: I1124 20:50:24.494287 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:50:26 crc kubenswrapper[4812]: I1124 20:50:26.562164 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f5b87f55c-47mq2" Nov 24 20:50:26 crc kubenswrapper[4812]: I1124 20:50:26.639555 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f44d765b8-k4cmk"] Nov 24 20:50:26 crc kubenswrapper[4812]: I1124 20:50:26.639885 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f44d765b8-k4cmk" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-api" containerID="cri-o://744b4ae0b4c086185c75dc6b539f4c62fbe3c76654739ec5921c3e16f35e29e9" gracePeriod=30 Nov 24 20:50:26 crc kubenswrapper[4812]: I1124 20:50:26.640110 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5f44d765b8-k4cmk" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-httpd" containerID="cri-o://cb144678e4a1689cdb895cce09ec49f30f5a7692108fad1cb32698ccc71de39e" gracePeriod=30 Nov 24 20:50:27 crc kubenswrapper[4812]: I1124 20:50:27.166828 4812 generic.go:334] "Generic (PLEG): container finished" podID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerID="cb144678e4a1689cdb895cce09ec49f30f5a7692108fad1cb32698ccc71de39e" exitCode=0 Nov 24 20:50:27 crc kubenswrapper[4812]: I1124 20:50:27.167189 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f44d765b8-k4cmk" event={"ID":"8a2b6699-c494-4751-9be6-9be5a49bdf4f","Type":"ContainerDied","Data":"cb144678e4a1689cdb895cce09ec49f30f5a7692108fad1cb32698ccc71de39e"} Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.297730 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f44d765b8-k4cmk" event={"ID":"8a2b6699-c494-4751-9be6-9be5a49bdf4f","Type":"ContainerDied","Data":"744b4ae0b4c086185c75dc6b539f4c62fbe3c76654739ec5921c3e16f35e29e9"} Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.297659 4812 generic.go:334] "Generic (PLEG): container 
finished" podID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerID="744b4ae0b4c086185c75dc6b539f4c62fbe3c76654739ec5921c3e16f35e29e9" exitCode=0 Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.801262 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.916044 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-combined-ca-bundle\") pod \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.916146 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5ffg\" (UniqueName: \"kubernetes.io/projected/8a2b6699-c494-4751-9be6-9be5a49bdf4f-kube-api-access-g5ffg\") pod \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.916219 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-config\") pod \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.916375 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-httpd-config\") pod \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.916460 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-ovndb-tls-certs\") pod \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\" (UID: \"8a2b6699-c494-4751-9be6-9be5a49bdf4f\") " Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.923380 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8a2b6699-c494-4751-9be6-9be5a49bdf4f" (UID: "8a2b6699-c494-4751-9be6-9be5a49bdf4f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.923504 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2b6699-c494-4751-9be6-9be5a49bdf4f-kube-api-access-g5ffg" (OuterVolumeSpecName: "kube-api-access-g5ffg") pod "8a2b6699-c494-4751-9be6-9be5a49bdf4f" (UID: "8a2b6699-c494-4751-9be6-9be5a49bdf4f"). InnerVolumeSpecName "kube-api-access-g5ffg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.983518 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-config" (OuterVolumeSpecName: "config") pod "8a2b6699-c494-4751-9be6-9be5a49bdf4f" (UID: "8a2b6699-c494-4751-9be6-9be5a49bdf4f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:50:37 crc kubenswrapper[4812]: I1124 20:50:37.993620 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a2b6699-c494-4751-9be6-9be5a49bdf4f" (UID: "8a2b6699-c494-4751-9be6-9be5a49bdf4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.004582 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8a2b6699-c494-4751-9be6-9be5a49bdf4f" (UID: "8a2b6699-c494-4751-9be6-9be5a49bdf4f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.018840 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5ffg\" (UniqueName: \"kubernetes.io/projected/8a2b6699-c494-4751-9be6-9be5a49bdf4f-kube-api-access-g5ffg\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.018873 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.018889 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.018898 4812 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.018905 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a2b6699-c494-4751-9be6-9be5a49bdf4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.314046 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f44d765b8-k4cmk" event={"ID":"8a2b6699-c494-4751-9be6-9be5a49bdf4f","Type":"ContainerDied","Data":"a87dd7b151ceea16b1829618d7985e39b4303b9288dfb365e1f67768715333a1"} Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.314460 4812 scope.go:117] "RemoveContainer" containerID="cb144678e4a1689cdb895cce09ec49f30f5a7692108fad1cb32698ccc71de39e" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.314726 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5f44d765b8-k4cmk" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.349973 4812 scope.go:117] "RemoveContainer" containerID="744b4ae0b4c086185c75dc6b539f4c62fbe3c76654739ec5921c3e16f35e29e9" Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.378490 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f44d765b8-k4cmk"] Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.392988 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f44d765b8-k4cmk"] Nov 24 20:50:38 crc kubenswrapper[4812]: I1124 20:50:38.984894 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" path="/var/lib/kubelet/pods/8a2b6699-c494-4751-9be6-9be5a49bdf4f/volumes" Nov 24 20:50:42 crc kubenswrapper[4812]: E1124 20:50:42.434078 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:57064->38.102.83.36:46073: write tcp 38.102.83.36:57064->38.102.83.36:46073: write: broken pipe Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.303879 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lkfn2"] Nov 24 20:50:46 crc kubenswrapper[4812]: E1124 20:50:46.304970 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerName="init" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.304989 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerName="init" Nov 24 20:50:46 crc kubenswrapper[4812]: E1124 20:50:46.305001 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-httpd" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.305008 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-httpd" Nov 24 20:50:46 crc kubenswrapper[4812]: E1124 20:50:46.305033 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-api" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.305041 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-api" Nov 24 20:50:46 crc kubenswrapper[4812]: E1124 20:50:46.305053 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerName="dnsmasq-dns" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.305060 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerName="dnsmasq-dns" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.305260 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-api" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.305287 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba77045-9e8e-425f-8160-b80ab2b21b36" containerName="dnsmasq-dns" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.305302 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2b6699-c494-4751-9be6-9be5a49bdf4f" containerName="neutron-httpd" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.306019 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lkfn2" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.308705 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.308885 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.308999 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rqcpb" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.309113 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.317294 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.358020 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lkfn2"] Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.381913 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-lkfn2"] Nov 24 20:50:46 crc kubenswrapper[4812]: E1124 20:50:46.382864 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-2jkpw ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-2jkpw ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-lkfn2" podUID="b51c985d-5b8b-47fd-9ab9-f77eec70e0fd" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.392363 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wfb4f"] Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.395614 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.403547 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lkfn2" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.403664 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wfb4f"] Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.441396 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lkfn2" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.460404 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d7dfc545-4qj7l"] Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.462120 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.488506 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d7dfc545-4qj7l"] Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.504478 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-swiftconf\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.504624 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb34afdf-0308-4f83-9ebe-1d25aef208cb-etc-swift\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.504672 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-ring-data-devices\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.504696 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-dispersionconf\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.504728 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-combined-ca-bundle\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.504754 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-scripts\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.504800 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsmn\" (UniqueName: \"kubernetes.io/projected/fb34afdf-0308-4f83-9ebe-1d25aef208cb-kube-api-access-kvsmn\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-combined-ca-bundle\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606704 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-scripts\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsmn\" (UniqueName: \"kubernetes.io/projected/fb34afdf-0308-4f83-9ebe-1d25aef208cb-kube-api-access-kvsmn\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606763 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-swiftconf\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606799 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-sb\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606824 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-dns-svc\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606851 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-nb\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.606984 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqcr\" (UniqueName: \"kubernetes.io/projected/e00419f9-6c65-4198-9faf-28a1ad214606-kube-api-access-4cqcr\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.607026 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb34afdf-0308-4f83-9ebe-1d25aef208cb-etc-swift\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.607064 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-config\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.607099 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-ring-data-devices\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.607122 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-dispersionconf\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.607521 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb34afdf-0308-4f83-9ebe-1d25aef208cb-etc-swift\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.609561 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-scripts\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.612612 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-dispersionconf\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.612702 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-swiftconf\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.617799 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-combined-ca-bundle\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.618191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-ring-data-devices\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.629973 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsmn\" (UniqueName: \"kubernetes.io/projected/fb34afdf-0308-4f83-9ebe-1d25aef208cb-kube-api-access-kvsmn\") pod \"swift-ring-rebalance-wfb4f\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.709025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-sb\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.709080 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-dns-svc\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.709111 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-nb\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.709160 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqcr\" (UniqueName: \"kubernetes.io/projected/e00419f9-6c65-4198-9faf-28a1ad214606-kube-api-access-4cqcr\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.709189 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-config\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.709965 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-sb\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.709995 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-config\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.710550 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-nb\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.710839 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-dns-svc\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.726549 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqcr\" (UniqueName: \"kubernetes.io/projected/e00419f9-6c65-4198-9faf-28a1ad214606-kube-api-access-4cqcr\") pod \"dnsmasq-dns-59d7dfc545-4qj7l\" (UID: 
\"e00419f9-6c65-4198-9faf-28a1ad214606\") " pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.736468 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:46 crc kubenswrapper[4812]: I1124 20:50:46.786000 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.183526 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wfb4f"] Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.258946 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d7dfc545-4qj7l"] Nov 24 20:50:47 crc kubenswrapper[4812]: W1124 20:50:47.269187 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode00419f9_6c65_4198_9faf_28a1ad214606.slice/crio-558c9723b634dfc604762abadf41fc9ea3673246dbda273b0484dcf9a94a29f7 WatchSource:0}: Error finding container 558c9723b634dfc604762abadf41fc9ea3673246dbda273b0484dcf9a94a29f7: Status 404 returned error can't find the container with id 558c9723b634dfc604762abadf41fc9ea3673246dbda273b0484dcf9a94a29f7 Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.412315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wfb4f" event={"ID":"fb34afdf-0308-4f83-9ebe-1d25aef208cb","Type":"ContainerStarted","Data":"2dd982c358e19709360f6a5b99243e0239acfcb7f8cc94bb10aad7a05bc2e6ea"} Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.412784 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wfb4f" event={"ID":"fb34afdf-0308-4f83-9ebe-1d25aef208cb","Type":"ContainerStarted","Data":"c53f63c4bbbfa4dafa376f5e6c76e5e09ea8b4ec05990f21394da8dae6fcee4e"} Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.414309 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" event={"ID":"e00419f9-6c65-4198-9faf-28a1ad214606","Type":"ContainerStarted","Data":"558c9723b634dfc604762abadf41fc9ea3673246dbda273b0484dcf9a94a29f7"} Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.414396 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lkfn2" Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.439547 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wfb4f" podStartSLOduration=1.439525393 podStartE2EDuration="1.439525393s" podCreationTimestamp="2025-11-24 20:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:50:47.437037562 +0000 UTC m=+5641.225989953" watchObservedRunningTime="2025-11-24 20:50:47.439525393 +0000 UTC m=+5641.228477764" Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.483294 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-lkfn2"] Nov 24 20:50:47 crc kubenswrapper[4812]: I1124 20:50:47.498143 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-lkfn2"] Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.424032 4812 generic.go:334] "Generic (PLEG): container finished" podID="e00419f9-6c65-4198-9faf-28a1ad214606" containerID="aae1660d2ab81c41f965fd203ac7fc7900a2828bfe6ee697f1a90389c47c90c2" exitCode=0 Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.424123 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" event={"ID":"e00419f9-6c65-4198-9faf-28a1ad214606","Type":"ContainerDied","Data":"aae1660d2ab81c41f965fd203ac7fc7900a2828bfe6ee697f1a90389c47c90c2"} Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.836144 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-cc5dd9bf4-5f8nr"] Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.838164 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.845741 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.860551 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cc5dd9bf4-5f8nr"] Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.948764 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-log-httpd\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.948861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-run-httpd\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.948892 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-etc-swift\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.948915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-config-data\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.948933 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-combined-ca-bundle\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.949107 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8mk\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-kube-api-access-4p8mk\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:48 crc kubenswrapper[4812]: I1124 20:50:48.976960 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b51c985d-5b8b-47fd-9ab9-f77eec70e0fd" path="/var/lib/kubelet/pods/b51c985d-5b8b-47fd-9ab9-f77eec70e0fd/volumes" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.050721 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-log-httpd\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.050927 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-run-httpd\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.051007 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-etc-swift\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.051073 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-config-data\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.051111 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-combined-ca-bundle\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.051179 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8mk\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-kube-api-access-4p8mk\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.051717 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-log-httpd\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.052771 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-run-httpd\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.059446 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-combined-ca-bundle\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.068389 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-config-data\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.070038 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-etc-swift\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.074511 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8mk\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-kube-api-access-4p8mk\") pod \"swift-proxy-cc5dd9bf4-5f8nr\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.164637 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.434927 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" event={"ID":"e00419f9-6c65-4198-9faf-28a1ad214606","Type":"ContainerStarted","Data":"02fd422028e424af4b52fcf25a4587b28c58df9c7d2fe5ab05d35b571b06b34c"} Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.435119 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.461940 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" podStartSLOduration=3.461921571 podStartE2EDuration="3.461921571s" podCreationTimestamp="2025-11-24 20:50:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:50:49.459578945 +0000 UTC m=+5643.248531326" watchObservedRunningTime="2025-11-24 20:50:49.461921571 +0000 UTC m=+5643.250873952" Nov 24 20:50:49 crc kubenswrapper[4812]: I1124 20:50:49.898742 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-cc5dd9bf4-5f8nr"] Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.297449 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-f55484fc8-26grz"] Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.303603 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.307954 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.308141 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.312252 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f55484fc8-26grz"] Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375394 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-config-data\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375454 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-internal-tls-certs\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375492 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/ca44fa40-7e18-4f18-b5a9-8714994880b8-kube-api-access-lc6nv\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375530 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca44fa40-7e18-4f18-b5a9-8714994880b8-log-httpd\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375563 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca44fa40-7e18-4f18-b5a9-8714994880b8-run-httpd\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375603 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-public-tls-certs\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375628 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ca44fa40-7e18-4f18-b5a9-8714994880b8-etc-swift\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.375664 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-combined-ca-bundle\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.451235 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" event={"ID":"4b2884fb-957e-47b7-8c8d-2e947921388c","Type":"ContainerStarted","Data":"d8654310a687a799a11d2e07e8abc633b8a351246dddea913f043b4113fe43d9"} Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.451301 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" event={"ID":"4b2884fb-957e-47b7-8c8d-2e947921388c","Type":"ContainerStarted","Data":"2e80afcd6905fccd08c09f73a179856ec96debdee36b31ea90ed07a9810cdc51"} Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.451314 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" event={"ID":"4b2884fb-957e-47b7-8c8d-2e947921388c","Type":"ContainerStarted","Data":"e3d640a5d3dac149639f0c09c5cc2d59a51badaf68640b857e8bdc2a23ce5a11"} Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.477484 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-public-tls-certs\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.477538 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ca44fa40-7e18-4f18-b5a9-8714994880b8-etc-swift\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.477588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-combined-ca-bundle\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.477640 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-config-data\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.477685 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-internal-tls-certs\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.477724 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/ca44fa40-7e18-4f18-b5a9-8714994880b8-kube-api-access-lc6nv\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc 
kubenswrapper[4812]: I1124 20:50:50.477753 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca44fa40-7e18-4f18-b5a9-8714994880b8-log-httpd\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.477779 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca44fa40-7e18-4f18-b5a9-8714994880b8-run-httpd\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.478226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca44fa40-7e18-4f18-b5a9-8714994880b8-run-httpd\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.478515 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca44fa40-7e18-4f18-b5a9-8714994880b8-log-httpd\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.483733 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-combined-ca-bundle\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.483750 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-config-data\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.483874 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-public-tls-certs\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.487240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ca44fa40-7e18-4f18-b5a9-8714994880b8-etc-swift\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.503370 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc6nv\" (UniqueName: \"kubernetes.io/projected/ca44fa40-7e18-4f18-b5a9-8714994880b8-kube-api-access-lc6nv\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.514013 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ca44fa40-7e18-4f18-b5a9-8714994880b8-internal-tls-certs\") pod \"swift-proxy-f55484fc8-26grz\" (UID: \"ca44fa40-7e18-4f18-b5a9-8714994880b8\") " pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:50 crc kubenswrapper[4812]: I1124 20:50:50.626017 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:51 crc kubenswrapper[4812]: I1124 20:50:51.214687 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" podStartSLOduration=3.214669109 podStartE2EDuration="3.214669109s" podCreationTimestamp="2025-11-24 20:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:50:50.499060813 +0000 UTC m=+5644.288013184" watchObservedRunningTime="2025-11-24 20:50:51.214669109 +0000 UTC m=+5645.003621480" Nov 24 20:50:51 crc kubenswrapper[4812]: I1124 20:50:51.219101 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-f55484fc8-26grz"] Nov 24 20:50:51 crc kubenswrapper[4812]: W1124 20:50:51.222918 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca44fa40_7e18_4f18_b5a9_8714994880b8.slice/crio-bf14ab74c270b1369133fc15c7907e3739e398ebdb413daaafd8e78653ff3b06 WatchSource:0}: Error finding container bf14ab74c270b1369133fc15c7907e3739e398ebdb413daaafd8e78653ff3b06: Status 404 returned error can't find the container with id bf14ab74c270b1369133fc15c7907e3739e398ebdb413daaafd8e78653ff3b06 Nov 24 20:50:51 crc kubenswrapper[4812]: I1124 20:50:51.467971 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f55484fc8-26grz" event={"ID":"ca44fa40-7e18-4f18-b5a9-8714994880b8","Type":"ContainerStarted","Data":"bf14ab74c270b1369133fc15c7907e3739e398ebdb413daaafd8e78653ff3b06"} Nov 24 20:50:51 crc kubenswrapper[4812]: I1124 20:50:51.470608 4812 generic.go:334] "Generic (PLEG): container finished" podID="fb34afdf-0308-4f83-9ebe-1d25aef208cb" containerID="2dd982c358e19709360f6a5b99243e0239acfcb7f8cc94bb10aad7a05bc2e6ea" exitCode=0 Nov 24 20:50:51 crc kubenswrapper[4812]: I1124 20:50:51.470756 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wfb4f" event={"ID":"fb34afdf-0308-4f83-9ebe-1d25aef208cb","Type":"ContainerDied","Data":"2dd982c358e19709360f6a5b99243e0239acfcb7f8cc94bb10aad7a05bc2e6ea"} Nov 24 20:50:51 crc kubenswrapper[4812]: I1124 20:50:51.470983 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:51 crc kubenswrapper[4812]: I1124 20:50:51.471007 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:52 crc kubenswrapper[4812]: I1124 20:50:52.520057 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f55484fc8-26grz" event={"ID":"ca44fa40-7e18-4f18-b5a9-8714994880b8","Type":"ContainerStarted","Data":"b26f1f33e5e09d6052377aeb549926393eae8fb843b845a00ede6b7e957e2ebb"} Nov 24 20:50:52 crc kubenswrapper[4812]: I1124 20:50:52.520741 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:52 crc kubenswrapper[4812]: I1124 20:50:52.520820 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:50:52 crc kubenswrapper[4812]: I1124 20:50:52.520858 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-f55484fc8-26grz" event={"ID":"ca44fa40-7e18-4f18-b5a9-8714994880b8","Type":"ContainerStarted","Data":"7937795c7dd7fef9712614638197d771795e2865f88d7fd37c322f6f556469cd"} Nov 24 20:50:52 crc kubenswrapper[4812]: I1124 20:50:52.579228 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-f55484fc8-26grz" podStartSLOduration=2.57920784 podStartE2EDuration="2.57920784s" podCreationTimestamp="2025-11-24 20:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:50:52.562541607 +0000 UTC m=+5646.351493988" watchObservedRunningTime="2025-11-24 20:50:52.57920784 +0000 UTC m=+5646.368160211" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.006523 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.126238 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb34afdf-0308-4f83-9ebe-1d25aef208cb-etc-swift\") pod \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.126351 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvsmn\" (UniqueName: \"kubernetes.io/projected/fb34afdf-0308-4f83-9ebe-1d25aef208cb-kube-api-access-kvsmn\") pod \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.126391 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-scripts\") pod \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.126427 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-swiftconf\") pod \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.126468 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-combined-ca-bundle\") pod \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.126513 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-dispersionconf\") pod \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\" (UID: \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.126572 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-ring-data-devices\") pod \"fb34afdf-0308-4f83-9ebe-1d25aef208cb\" (UID: 
\"fb34afdf-0308-4f83-9ebe-1d25aef208cb\") " Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.127306 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb34afdf-0308-4f83-9ebe-1d25aef208cb-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fb34afdf-0308-4f83-9ebe-1d25aef208cb" (UID: "fb34afdf-0308-4f83-9ebe-1d25aef208cb"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.127745 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "fb34afdf-0308-4f83-9ebe-1d25aef208cb" (UID: "fb34afdf-0308-4f83-9ebe-1d25aef208cb"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.137172 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb34afdf-0308-4f83-9ebe-1d25aef208cb-kube-api-access-kvsmn" (OuterVolumeSpecName: "kube-api-access-kvsmn") pod "fb34afdf-0308-4f83-9ebe-1d25aef208cb" (UID: "fb34afdf-0308-4f83-9ebe-1d25aef208cb"). InnerVolumeSpecName "kube-api-access-kvsmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.140297 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "fb34afdf-0308-4f83-9ebe-1d25aef208cb" (UID: "fb34afdf-0308-4f83-9ebe-1d25aef208cb"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.148623 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-scripts" (OuterVolumeSpecName: "scripts") pod "fb34afdf-0308-4f83-9ebe-1d25aef208cb" (UID: "fb34afdf-0308-4f83-9ebe-1d25aef208cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.150388 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "fb34afdf-0308-4f83-9ebe-1d25aef208cb" (UID: "fb34afdf-0308-4f83-9ebe-1d25aef208cb"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.152980 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb34afdf-0308-4f83-9ebe-1d25aef208cb" (UID: "fb34afdf-0308-4f83-9ebe-1d25aef208cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.228438 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/fb34afdf-0308-4f83-9ebe-1d25aef208cb-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.228639 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvsmn\" (UniqueName: \"kubernetes.io/projected/fb34afdf-0308-4f83-9ebe-1d25aef208cb-kube-api-access-kvsmn\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.228709 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.228765 4812 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.228847 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.228907 4812 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/fb34afdf-0308-4f83-9ebe-1d25aef208cb-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.228969 4812 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/fb34afdf-0308-4f83-9ebe-1d25aef208cb-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.558770 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wfb4f" Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.559582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wfb4f" event={"ID":"fb34afdf-0308-4f83-9ebe-1d25aef208cb","Type":"ContainerDied","Data":"c53f63c4bbbfa4dafa376f5e6c76e5e09ea8b4ec05990f21394da8dae6fcee4e"} Nov 24 20:50:53 crc kubenswrapper[4812]: I1124 20:50:53.559639 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53f63c4bbbfa4dafa376f5e6c76e5e09ea8b4ec05990f21394da8dae6fcee4e" Nov 24 20:50:56 crc kubenswrapper[4812]: I1124 20:50:56.789685 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:50:56 crc kubenswrapper[4812]: I1124 20:50:56.882646 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7684487-f4xqs"] Nov 24 20:50:56 crc kubenswrapper[4812]: I1124 20:50:56.882923 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" podUID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerName="dnsmasq-dns" containerID="cri-o://78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a" gracePeriod=10 Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.356039 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.421385 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fwvr\" (UniqueName: \"kubernetes.io/projected/61621044-78cb-418d-a1a6-b88dae6f0f36-kube-api-access-9fwvr\") pod \"61621044-78cb-418d-a1a6-b88dae6f0f36\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.421507 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-nb\") pod \"61621044-78cb-418d-a1a6-b88dae6f0f36\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.421607 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-config\") pod \"61621044-78cb-418d-a1a6-b88dae6f0f36\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.421635 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-dns-svc\") pod \"61621044-78cb-418d-a1a6-b88dae6f0f36\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.421662 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-sb\") pod \"61621044-78cb-418d-a1a6-b88dae6f0f36\" (UID: \"61621044-78cb-418d-a1a6-b88dae6f0f36\") " Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.426867 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61621044-78cb-418d-a1a6-b88dae6f0f36-kube-api-access-9fwvr" (OuterVolumeSpecName: "kube-api-access-9fwvr") pod "61621044-78cb-418d-a1a6-b88dae6f0f36" (UID: "61621044-78cb-418d-a1a6-b88dae6f0f36"). InnerVolumeSpecName "kube-api-access-9fwvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.462857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61621044-78cb-418d-a1a6-b88dae6f0f36" (UID: "61621044-78cb-418d-a1a6-b88dae6f0f36"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.463245 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61621044-78cb-418d-a1a6-b88dae6f0f36" (UID: "61621044-78cb-418d-a1a6-b88dae6f0f36"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.478032 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61621044-78cb-418d-a1a6-b88dae6f0f36" (UID: "61621044-78cb-418d-a1a6-b88dae6f0f36"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.494308 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-config" (OuterVolumeSpecName: "config") pod "61621044-78cb-418d-a1a6-b88dae6f0f36" (UID: "61621044-78cb-418d-a1a6-b88dae6f0f36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.523931 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.523974 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.523987 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.523999 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61621044-78cb-418d-a1a6-b88dae6f0f36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.524013 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fwvr\" (UniqueName: \"kubernetes.io/projected/61621044-78cb-418d-a1a6-b88dae6f0f36-kube-api-access-9fwvr\") on node \"crc\" DevicePath \"\"" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.602928 4812 generic.go:334] "Generic (PLEG): container finished" podID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerID="78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a" exitCode=0 Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.603007 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.603035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" event={"ID":"61621044-78cb-418d-a1a6-b88dae6f0f36","Type":"ContainerDied","Data":"78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a"} Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.603085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7684487-f4xqs" event={"ID":"61621044-78cb-418d-a1a6-b88dae6f0f36","Type":"ContainerDied","Data":"c24822aea5ed3965b9a90ecaaff0dd0dc9d263157a9ce3b0d014afd1b9d5d031"} Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.603114 4812 scope.go:117] "RemoveContainer" containerID="78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.637512 4812 scope.go:117] "RemoveContainer" containerID="a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.647151 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7684487-f4xqs"] Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.660941 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7684487-f4xqs"] Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.671781 4812 scope.go:117] "RemoveContainer" containerID="78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a" Nov 24 20:50:57 crc kubenswrapper[4812]: E1124 20:50:57.672403 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a\": container with ID starting with 78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a not found: ID does not exist" containerID="78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.672472 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a"} err="failed to get container status \"78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a\": rpc error: code = NotFound desc = could not find container \"78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a\": container with ID starting with 78220ce55f77313035c855be0d72356c1de9b0451d31be9adfc89dc67ae3364a not found: ID does not exist" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.672521 4812 scope.go:117] "RemoveContainer" containerID="a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b" Nov 24 20:50:57 crc kubenswrapper[4812]: E1124 20:50:57.673093 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b\": container with ID starting with a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b not found: ID does not exist" containerID="a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b" Nov 24 20:50:57 crc kubenswrapper[4812]: I1124 20:50:57.673148 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b"} err="failed to get container status 
\"a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b\": rpc error: code = NotFound desc = could not find container \"a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b\": container with ID starting with a71fb177df29034cbdef8eda5be7b4db930bdbfc5305545efbfa0c5da28efa9b not found: ID does not exist" Nov 24 20:50:58 crc kubenswrapper[4812]: I1124 20:50:58.987188 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61621044-78cb-418d-a1a6-b88dae6f0f36" path="/var/lib/kubelet/pods/61621044-78cb-418d-a1a6-b88dae6f0f36/volumes" Nov 24 20:50:59 crc kubenswrapper[4812]: I1124 20:50:59.167969 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:50:59 crc kubenswrapper[4812]: I1124 20:50:59.169685 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:51:00 crc kubenswrapper[4812]: I1124 20:51:00.640444 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:51:00 crc kubenswrapper[4812]: I1124 20:51:00.645747 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-f55484fc8-26grz" Nov 24 20:51:00 crc kubenswrapper[4812]: I1124 20:51:00.736103 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-cc5dd9bf4-5f8nr"] Nov 24 20:51:00 crc kubenswrapper[4812]: I1124 20:51:00.736482 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-httpd" containerID="cri-o://2e80afcd6905fccd08c09f73a179856ec96debdee36b31ea90ed07a9810cdc51" gracePeriod=30 Nov 24 20:51:00 crc kubenswrapper[4812]: I1124 20:51:00.736551 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-server" containerID="cri-o://d8654310a687a799a11d2e07e8abc633b8a351246dddea913f043b4113fe43d9" gracePeriod=30 Nov 24 20:51:01 crc kubenswrapper[4812]: I1124 20:51:01.646706 4812 generic.go:334] "Generic (PLEG): container finished" podID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerID="d8654310a687a799a11d2e07e8abc633b8a351246dddea913f043b4113fe43d9" exitCode=0 Nov 24 20:51:01 crc kubenswrapper[4812]: I1124 20:51:01.646737 4812 generic.go:334] "Generic (PLEG): container finished" podID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerID="2e80afcd6905fccd08c09f73a179856ec96debdee36b31ea90ed07a9810cdc51" exitCode=0 Nov 24 20:51:01 crc kubenswrapper[4812]: I1124 20:51:01.646972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" event={"ID":"4b2884fb-957e-47b7-8c8d-2e947921388c","Type":"ContainerDied","Data":"d8654310a687a799a11d2e07e8abc633b8a351246dddea913f043b4113fe43d9"} Nov 24 20:51:01 crc kubenswrapper[4812]: I1124 20:51:01.647014 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" event={"ID":"4b2884fb-957e-47b7-8c8d-2e947921388c","Type":"ContainerDied","Data":"2e80afcd6905fccd08c09f73a179856ec96debdee36b31ea90ed07a9810cdc51"} Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.000321 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.129571 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p8mk\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-kube-api-access-4p8mk\") pod \"4b2884fb-957e-47b7-8c8d-2e947921388c\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.129699 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-run-httpd\") pod \"4b2884fb-957e-47b7-8c8d-2e947921388c\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.129735 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-config-data\") pod \"4b2884fb-957e-47b7-8c8d-2e947921388c\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.129786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-log-httpd\") pod \"4b2884fb-957e-47b7-8c8d-2e947921388c\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.129947 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-combined-ca-bundle\") pod \"4b2884fb-957e-47b7-8c8d-2e947921388c\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.130070 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-etc-swift\") pod \"4b2884fb-957e-47b7-8c8d-2e947921388c\" (UID: \"4b2884fb-957e-47b7-8c8d-2e947921388c\") " Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.132012 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b2884fb-957e-47b7-8c8d-2e947921388c" (UID: "4b2884fb-957e-47b7-8c8d-2e947921388c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.132503 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b2884fb-957e-47b7-8c8d-2e947921388c" (UID: "4b2884fb-957e-47b7-8c8d-2e947921388c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.137306 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4b2884fb-957e-47b7-8c8d-2e947921388c" (UID: "4b2884fb-957e-47b7-8c8d-2e947921388c"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.137965 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-kube-api-access-4p8mk" (OuterVolumeSpecName: "kube-api-access-4p8mk") pod "4b2884fb-957e-47b7-8c8d-2e947921388c" (UID: "4b2884fb-957e-47b7-8c8d-2e947921388c"). InnerVolumeSpecName "kube-api-access-4p8mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.185512 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-config-data" (OuterVolumeSpecName: "config-data") pod "4b2884fb-957e-47b7-8c8d-2e947921388c" (UID: "4b2884fb-957e-47b7-8c8d-2e947921388c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.201401 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b2884fb-957e-47b7-8c8d-2e947921388c" (UID: "4b2884fb-957e-47b7-8c8d-2e947921388c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.232698 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.232739 4812 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.232754 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p8mk\" (UniqueName: \"kubernetes.io/projected/4b2884fb-957e-47b7-8c8d-2e947921388c-kube-api-access-4p8mk\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.232769 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b2884fb-957e-47b7-8c8d-2e947921388c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.232780 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.232794 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b2884fb-957e-47b7-8c8d-2e947921388c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.658754 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" event={"ID":"4b2884fb-957e-47b7-8c8d-2e947921388c","Type":"ContainerDied","Data":"e3d640a5d3dac149639f0c09c5cc2d59a51badaf68640b857e8bdc2a23ce5a11"} Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.659134 4812 scope.go:117] "RemoveContainer" containerID="d8654310a687a799a11d2e07e8abc633b8a351246dddea913f043b4113fe43d9" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.658853 4812 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-cc5dd9bf4-5f8nr" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.700614 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-cc5dd9bf4-5f8nr"] Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.701249 4812 scope.go:117] "RemoveContainer" containerID="2e80afcd6905fccd08c09f73a179856ec96debdee36b31ea90ed07a9810cdc51" Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.709272 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-cc5dd9bf4-5f8nr"] Nov 24 20:51:02 crc kubenswrapper[4812]: I1124 20:51:02.984518 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" path="/var/lib/kubelet/pods/4b2884fb-957e-47b7-8c8d-2e947921388c/volumes" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.753023 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xxqg8"] Nov 24 20:51:06 crc kubenswrapper[4812]: E1124 20:51:06.753931 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb34afdf-0308-4f83-9ebe-1d25aef208cb" containerName="swift-ring-rebalance" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.753946 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb34afdf-0308-4f83-9ebe-1d25aef208cb" containerName="swift-ring-rebalance" Nov 24 20:51:06 crc kubenswrapper[4812]: E1124 20:51:06.753968 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerName="init" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.753976 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerName="init" Nov 24 20:51:06 crc kubenswrapper[4812]: E1124 20:51:06.753986 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerName="dnsmasq-dns" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.754023 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerName="dnsmasq-dns" Nov 24 20:51:06 crc kubenswrapper[4812]: E1124 20:51:06.754045 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-server" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.754054 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-server" Nov 24 20:51:06 crc kubenswrapper[4812]: E1124 20:51:06.754074 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-httpd" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.754086 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-httpd" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.754323 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="61621044-78cb-418d-a1a6-b88dae6f0f36" containerName="dnsmasq-dns" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.754359 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-server" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.754373 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb34afdf-0308-4f83-9ebe-1d25aef208cb" 
containerName="swift-ring-rebalance" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.754400 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b2884fb-957e-47b7-8c8d-2e947921388c" containerName="proxy-httpd" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.755045 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.769645 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xxqg8"] Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.779518 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9816-account-create-5vd9g"] Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.780909 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.793513 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.796142 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9816-account-create-5vd9g"] Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.938055 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a481c6e-8ec6-4c42-892b-346fdca3aace-operator-scripts\") pod \"cinder-9816-account-create-5vd9g\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.938112 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d6gv\" (UniqueName: \"kubernetes.io/projected/1c982f0e-64c7-4e8e-999c-39c42e012f5c-kube-api-access-7d6gv\") pod \"cinder-db-create-xxqg8\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.938152 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7zp\" (UniqueName: \"kubernetes.io/projected/1a481c6e-8ec6-4c42-892b-346fdca3aace-kube-api-access-vl7zp\") pod \"cinder-9816-account-create-5vd9g\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:06 crc kubenswrapper[4812]: I1124 20:51:06.938181 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c982f0e-64c7-4e8e-999c-39c42e012f5c-operator-scripts\") pod \"cinder-db-create-xxqg8\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.039620 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d6gv\" (UniqueName: \"kubernetes.io/projected/1c982f0e-64c7-4e8e-999c-39c42e012f5c-kube-api-access-7d6gv\") pod \"cinder-db-create-xxqg8\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.039689 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7zp\" (UniqueName: 
\"kubernetes.io/projected/1a481c6e-8ec6-4c42-892b-346fdca3aace-kube-api-access-vl7zp\") pod \"cinder-9816-account-create-5vd9g\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.039719 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c982f0e-64c7-4e8e-999c-39c42e012f5c-operator-scripts\") pod \"cinder-db-create-xxqg8\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.039839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a481c6e-8ec6-4c42-892b-346fdca3aace-operator-scripts\") pod \"cinder-9816-account-create-5vd9g\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.040563 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a481c6e-8ec6-4c42-892b-346fdca3aace-operator-scripts\") pod \"cinder-9816-account-create-5vd9g\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.040768 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c982f0e-64c7-4e8e-999c-39c42e012f5c-operator-scripts\") pod \"cinder-db-create-xxqg8\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.058613 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7zp\" (UniqueName: \"kubernetes.io/projected/1a481c6e-8ec6-4c42-892b-346fdca3aace-kube-api-access-vl7zp\") pod \"cinder-9816-account-create-5vd9g\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.064008 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d6gv\" (UniqueName: \"kubernetes.io/projected/1c982f0e-64c7-4e8e-999c-39c42e012f5c-kube-api-access-7d6gv\") pod \"cinder-db-create-xxqg8\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.080591 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.105961 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.430313 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9816-account-create-5vd9g"] Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.591901 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xxqg8"] Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.704241 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xxqg8" event={"ID":"1c982f0e-64c7-4e8e-999c-39c42e012f5c","Type":"ContainerStarted","Data":"8b83949eca3d02681dc0b2488b33433b93f531ae7a0c5c8326297a2759a06694"} Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.705983 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9816-account-create-5vd9g" event={"ID":"1a481c6e-8ec6-4c42-892b-346fdca3aace","Type":"ContainerStarted","Data":"9bc46cfb314a54f3cde70e4067c64d432c623ab0c8a0939edd8d5898a7b3801f"} Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.706407 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9816-account-create-5vd9g" event={"ID":"1a481c6e-8ec6-4c42-892b-346fdca3aace","Type":"ContainerStarted","Data":"3677afad3affcb3a0d9784e20fb904669b8863a21188026a46b995489b3c596e"} Nov 24 20:51:07 crc kubenswrapper[4812]: I1124 20:51:07.721614 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-9816-account-create-5vd9g" podStartSLOduration=1.721596805 podStartE2EDuration="1.721596805s" podCreationTimestamp="2025-11-24 20:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:51:07.719123345 +0000 UTC m=+5661.508075706" watchObservedRunningTime="2025-11-24 20:51:07.721596805 +0000 UTC m=+5661.510549176" Nov 24 20:51:08 crc kubenswrapper[4812]: I1124 20:51:08.718061 4812 generic.go:334] "Generic (PLEG): container finished" podID="1a481c6e-8ec6-4c42-892b-346fdca3aace" containerID="9bc46cfb314a54f3cde70e4067c64d432c623ab0c8a0939edd8d5898a7b3801f" exitCode=0 Nov 24 20:51:08 crc kubenswrapper[4812]: I1124 20:51:08.718120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9816-account-create-5vd9g" event={"ID":"1a481c6e-8ec6-4c42-892b-346fdca3aace","Type":"ContainerDied","Data":"9bc46cfb314a54f3cde70e4067c64d432c623ab0c8a0939edd8d5898a7b3801f"} Nov 24 20:51:08 crc kubenswrapper[4812]: I1124 20:51:08.719892 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c982f0e-64c7-4e8e-999c-39c42e012f5c" containerID="e6c83beac92b914ac6eb587410357b8c38334d2f8b704b2611bcb6176a7a98b2" exitCode=0 Nov 24 20:51:08 crc kubenswrapper[4812]: I1124 20:51:08.719937 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xxqg8" event={"ID":"1c982f0e-64c7-4e8e-999c-39c42e012f5c","Type":"ContainerDied","Data":"e6c83beac92b914ac6eb587410357b8c38334d2f8b704b2611bcb6176a7a98b2"} Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.184528 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.191769 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.335710 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c982f0e-64c7-4e8e-999c-39c42e012f5c-operator-scripts\") pod \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.335778 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d6gv\" (UniqueName: \"kubernetes.io/projected/1c982f0e-64c7-4e8e-999c-39c42e012f5c-kube-api-access-7d6gv\") pod \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\" (UID: \"1c982f0e-64c7-4e8e-999c-39c42e012f5c\") " Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.335891 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7zp\" (UniqueName: \"kubernetes.io/projected/1a481c6e-8ec6-4c42-892b-346fdca3aace-kube-api-access-vl7zp\") pod \"1a481c6e-8ec6-4c42-892b-346fdca3aace\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.335969 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a481c6e-8ec6-4c42-892b-346fdca3aace-operator-scripts\") pod \"1a481c6e-8ec6-4c42-892b-346fdca3aace\" (UID: \"1a481c6e-8ec6-4c42-892b-346fdca3aace\") " Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.336418 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c982f0e-64c7-4e8e-999c-39c42e012f5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c982f0e-64c7-4e8e-999c-39c42e012f5c" (UID: "1c982f0e-64c7-4e8e-999c-39c42e012f5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.336768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a481c6e-8ec6-4c42-892b-346fdca3aace-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a481c6e-8ec6-4c42-892b-346fdca3aace" (UID: "1a481c6e-8ec6-4c42-892b-346fdca3aace"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.337247 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c982f0e-64c7-4e8e-999c-39c42e012f5c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.337266 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a481c6e-8ec6-4c42-892b-346fdca3aace-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.343448 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a481c6e-8ec6-4c42-892b-346fdca3aace-kube-api-access-vl7zp" (OuterVolumeSpecName: "kube-api-access-vl7zp") pod "1a481c6e-8ec6-4c42-892b-346fdca3aace" (UID: "1a481c6e-8ec6-4c42-892b-346fdca3aace"). InnerVolumeSpecName "kube-api-access-vl7zp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.346574 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c982f0e-64c7-4e8e-999c-39c42e012f5c-kube-api-access-7d6gv" (OuterVolumeSpecName: "kube-api-access-7d6gv") pod "1c982f0e-64c7-4e8e-999c-39c42e012f5c" (UID: "1c982f0e-64c7-4e8e-999c-39c42e012f5c"). InnerVolumeSpecName "kube-api-access-7d6gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.439059 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7d6gv\" (UniqueName: \"kubernetes.io/projected/1c982f0e-64c7-4e8e-999c-39c42e012f5c-kube-api-access-7d6gv\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.439096 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7zp\" (UniqueName: \"kubernetes.io/projected/1a481c6e-8ec6-4c42-892b-346fdca3aace-kube-api-access-vl7zp\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.741745 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xxqg8" event={"ID":"1c982f0e-64c7-4e8e-999c-39c42e012f5c","Type":"ContainerDied","Data":"8b83949eca3d02681dc0b2488b33433b93f531ae7a0c5c8326297a2759a06694"} Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.742041 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b83949eca3d02681dc0b2488b33433b93f531ae7a0c5c8326297a2759a06694" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.741798 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xxqg8" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.746452 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9816-account-create-5vd9g" event={"ID":"1a481c6e-8ec6-4c42-892b-346fdca3aace","Type":"ContainerDied","Data":"3677afad3affcb3a0d9784e20fb904669b8863a21188026a46b995489b3c596e"} Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.746491 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3677afad3affcb3a0d9784e20fb904669b8863a21188026a46b995489b3c596e" Nov 24 20:51:10 crc kubenswrapper[4812]: I1124 20:51:10.746566 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9816-account-create-5vd9g" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.011065 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-xhdqg"] Nov 24 20:51:12 crc kubenswrapper[4812]: E1124 20:51:12.013591 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a481c6e-8ec6-4c42-892b-346fdca3aace" containerName="mariadb-account-create" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.013630 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a481c6e-8ec6-4c42-892b-346fdca3aace" containerName="mariadb-account-create" Nov 24 20:51:12 crc kubenswrapper[4812]: E1124 20:51:12.013649 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c982f0e-64c7-4e8e-999c-39c42e012f5c" containerName="mariadb-database-create" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.013658 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c982f0e-64c7-4e8e-999c-39c42e012f5c" containerName="mariadb-database-create" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.014012 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c982f0e-64c7-4e8e-999c-39c42e012f5c" containerName="mariadb-database-create" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.014049 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a481c6e-8ec6-4c42-892b-346fdca3aace" containerName="mariadb-account-create" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.018298 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.025386 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.025574 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.026542 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dtkcw" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.030944 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xhdqg"] Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.171421 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-scripts\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.171743 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-config-data\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.171851 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-db-sync-config-data\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.171953 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmrn\" (UniqueName: \"kubernetes.io/projected/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-kube-api-access-ghmrn\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.172070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-etc-machine-id\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.172218 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-combined-ca-bundle\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.277099 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-config-data\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.277208 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-db-sync-config-data\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.277269 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmrn\" (UniqueName: \"kubernetes.io/projected/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-kube-api-access-ghmrn\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.277382 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-etc-machine-id\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.277578 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-combined-ca-bundle\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.277769 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-scripts\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.277793 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-etc-machine-id\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.284926 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-scripts\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.284933 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-combined-ca-bundle\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.290565 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-config-data\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.284934 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-db-sync-config-data\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.296072 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghmrn\" (UniqueName: \"kubernetes.io/projected/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-kube-api-access-ghmrn\") pod \"cinder-db-sync-xhdqg\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.355226 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.642702 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-xhdqg"] Nov 24 20:51:12 crc kubenswrapper[4812]: I1124 20:51:12.777733 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhdqg" event={"ID":"d00a581c-f73c-4e31-9fb4-a469e26fe3a3","Type":"ContainerStarted","Data":"50e5a862fe96b563c80d2e216eaf146ea102c0931efebd756a83022393665a09"} Nov 24 20:51:13 crc kubenswrapper[4812]: I1124 20:51:13.790473 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhdqg" event={"ID":"d00a581c-f73c-4e31-9fb4-a469e26fe3a3","Type":"ContainerStarted","Data":"76b6eb9693cdf7b0aac5eba5e66a95786a942ebc1a8f63bd8efcfb1f6bc8bebe"} Nov 24 20:51:13 crc kubenswrapper[4812]: I1124 20:51:13.829509 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-xhdqg" podStartSLOduration=2.8294819479999997 podStartE2EDuration="2.829481948s" podCreationTimestamp="2025-11-24 20:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:51:13.815267974 +0000 UTC m=+5667.604220385" watchObservedRunningTime="2025-11-24 20:51:13.829481948 +0000 UTC m=+5667.618434349" Nov 24 20:51:15 crc kubenswrapper[4812]: I1124 20:51:15.818206 4812 generic.go:334] "Generic (PLEG): container finished" podID="d00a581c-f73c-4e31-9fb4-a469e26fe3a3" containerID="76b6eb9693cdf7b0aac5eba5e66a95786a942ebc1a8f63bd8efcfb1f6bc8bebe" exitCode=0 Nov 24 20:51:15 crc kubenswrapper[4812]: I1124 20:51:15.818384 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhdqg" event={"ID":"d00a581c-f73c-4e31-9fb4-a469e26fe3a3","Type":"ContainerDied","Data":"76b6eb9693cdf7b0aac5eba5e66a95786a942ebc1a8f63bd8efcfb1f6bc8bebe"} Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.332411 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.395268 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-etc-machine-id\") pod \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.395412 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-scripts\") pod \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.395470 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghmrn\" (UniqueName: \"kubernetes.io/projected/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-kube-api-access-ghmrn\") pod \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.395502 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-config-data\") pod \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.395554 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-combined-ca-bundle\") pod \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.395686 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-db-sync-config-data\") pod \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\" (UID: \"d00a581c-f73c-4e31-9fb4-a469e26fe3a3\") " Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.397477 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d00a581c-f73c-4e31-9fb4-a469e26fe3a3" (UID: "d00a581c-f73c-4e31-9fb4-a469e26fe3a3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.404857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d00a581c-f73c-4e31-9fb4-a469e26fe3a3" (UID: "d00a581c-f73c-4e31-9fb4-a469e26fe3a3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.408184 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-kube-api-access-ghmrn" (OuterVolumeSpecName: "kube-api-access-ghmrn") pod "d00a581c-f73c-4e31-9fb4-a469e26fe3a3" (UID: "d00a581c-f73c-4e31-9fb4-a469e26fe3a3"). InnerVolumeSpecName "kube-api-access-ghmrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.412504 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-scripts" (OuterVolumeSpecName: "scripts") pod "d00a581c-f73c-4e31-9fb4-a469e26fe3a3" (UID: "d00a581c-f73c-4e31-9fb4-a469e26fe3a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.465530 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-config-data" (OuterVolumeSpecName: "config-data") pod "d00a581c-f73c-4e31-9fb4-a469e26fe3a3" (UID: "d00a581c-f73c-4e31-9fb4-a469e26fe3a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.470798 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d00a581c-f73c-4e31-9fb4-a469e26fe3a3" (UID: "d00a581c-f73c-4e31-9fb4-a469e26fe3a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.497248 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.497288 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.497299 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghmrn\" (UniqueName: \"kubernetes.io/projected/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-kube-api-access-ghmrn\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.497309 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.497316 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.497325 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d00a581c-f73c-4e31-9fb4-a469e26fe3a3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.844155 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-xhdqg" event={"ID":"d00a581c-f73c-4e31-9fb4-a469e26fe3a3","Type":"ContainerDied","Data":"50e5a862fe96b563c80d2e216eaf146ea102c0931efebd756a83022393665a09"} Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.844572 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50e5a862fe96b563c80d2e216eaf146ea102c0931efebd756a83022393665a09" Nov 24 20:51:17 crc kubenswrapper[4812]: I1124 20:51:17.844214 4812 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-xhdqg" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.162986 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fb6bf6ddc-8wnfp"] Nov 24 20:51:18 crc kubenswrapper[4812]: E1124 20:51:18.163368 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d00a581c-f73c-4e31-9fb4-a469e26fe3a3" containerName="cinder-db-sync" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.163385 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00a581c-f73c-4e31-9fb4-a469e26fe3a3" containerName="cinder-db-sync" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.163552 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d00a581c-f73c-4e31-9fb4-a469e26fe3a3" containerName="cinder-db-sync" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.164612 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.190872 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb6bf6ddc-8wnfp"] Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.211576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-config\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.211675 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbmhb\" (UniqueName: \"kubernetes.io/projected/1423088d-0ab4-40ae-b307-27988d74b383-kube-api-access-cbmhb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.211727 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-dns-svc\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.211744 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-sb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.211802 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-nb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.295475 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.296764 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.298969 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.299223 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.299408 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.299644 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dtkcw" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313113 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-sb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313150 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-dns-svc\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-logs\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313219 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-nb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313243 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313261 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-config\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313282 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-scripts\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313325 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data-custom\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313380 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbmhb\" (UniqueName: \"kubernetes.io/projected/1423088d-0ab4-40ae-b307-27988d74b383-kube-api-access-cbmhb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313405 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313469 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313487 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frwqm\" (UniqueName: \"kubernetes.io/projected/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-kube-api-access-frwqm\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.313990 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-sb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.314029 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-nb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.314198 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-config\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.314411 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-dns-svc\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.321831 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.348222 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbmhb\" (UniqueName: 
\"kubernetes.io/projected/1423088d-0ab4-40ae-b307-27988d74b383-kube-api-access-cbmhb\") pod \"dnsmasq-dns-fb6bf6ddc-8wnfp\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.414587 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data-custom\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.414649 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.414670 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.414688 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frwqm\" (UniqueName: \"kubernetes.io/projected/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-kube-api-access-frwqm\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.414884 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-logs\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.415062 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.415126 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-scripts\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.415290 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.415553 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-logs\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.420113 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data-custom\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.420449 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.422514 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.425873 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-scripts\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.433648 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frwqm\" (UniqueName: \"kubernetes.io/projected/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-kube-api-access-frwqm\") pod \"cinder-api-0\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.488017 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.613263 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:18 crc kubenswrapper[4812]: I1124 20:51:18.929322 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb6bf6ddc-8wnfp"] Nov 24 20:51:19 crc kubenswrapper[4812]: I1124 20:51:19.066889 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:19 crc kubenswrapper[4812]: W1124 20:51:19.092035 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a7a21e3_a0fe_4ac1_8ff8_f43df6c0c25f.slice/crio-081fe56fe52617e1793f2a782015bd65f157ad66844849bd3647652a0308a048 WatchSource:0}: Error finding container 081fe56fe52617e1793f2a782015bd65f157ad66844849bd3647652a0308a048: Status 404 returned error can't find the container with id 081fe56fe52617e1793f2a782015bd65f157ad66844849bd3647652a0308a048 Nov 24 20:51:19 crc kubenswrapper[4812]: I1124 20:51:19.861272 4812 generic.go:334] "Generic (PLEG): container finished" podID="1423088d-0ab4-40ae-b307-27988d74b383" containerID="1bd2818a88cc669a11f0d2d833715d539ce18e8eff48ae07c3a0ea3533349353" exitCode=0 Nov 24 20:51:19 crc kubenswrapper[4812]: I1124 20:51:19.861424 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" event={"ID":"1423088d-0ab4-40ae-b307-27988d74b383","Type":"ContainerDied","Data":"1bd2818a88cc669a11f0d2d833715d539ce18e8eff48ae07c3a0ea3533349353"} Nov 24 20:51:19 crc kubenswrapper[4812]: I1124 20:51:19.861753 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" event={"ID":"1423088d-0ab4-40ae-b307-27988d74b383","Type":"ContainerStarted","Data":"821a12e2b7f2a4743827d1268c4e153b2c7b2372ebaca39e86d98378f7ddd961"} Nov 24 20:51:19 crc kubenswrapper[4812]: I1124 20:51:19.866909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f","Type":"ContainerStarted","Data":"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa"} Nov 24 20:51:19 crc kubenswrapper[4812]: I1124 20:51:19.866939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f","Type":"ContainerStarted","Data":"081fe56fe52617e1793f2a782015bd65f157ad66844849bd3647652a0308a048"} Nov 24 20:51:20 crc kubenswrapper[4812]: I1124 20:51:20.878157 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f","Type":"ContainerStarted","Data":"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84"} Nov 24 20:51:20 crc kubenswrapper[4812]: I1124 20:51:20.878726 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 20:51:20 crc kubenswrapper[4812]: I1124 20:51:20.879977 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" event={"ID":"1423088d-0ab4-40ae-b307-27988d74b383","Type":"ContainerStarted","Data":"76ac57bd7c3d41cf85577663362aa609e197d9c59005ef450835b7e37ff6bce4"} Nov 24 20:51:20 crc kubenswrapper[4812]: I1124 20:51:20.880137 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:20 crc kubenswrapper[4812]: I1124 20:51:20.893868 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.893851463 
podStartE2EDuration="2.893851463s" podCreationTimestamp="2025-11-24 20:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:51:20.891039053 +0000 UTC m=+5674.679991434" watchObservedRunningTime="2025-11-24 20:51:20.893851463 +0000 UTC m=+5674.682803824" Nov 24 20:51:20 crc kubenswrapper[4812]: I1124 20:51:20.914217 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" podStartSLOduration=2.91419687 podStartE2EDuration="2.91419687s" podCreationTimestamp="2025-11-24 20:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:51:20.909437695 +0000 UTC m=+5674.698390056" watchObservedRunningTime="2025-11-24 20:51:20.91419687 +0000 UTC m=+5674.703149241" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.182793 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8r2"] Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.184895 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.201302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8r2"] Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.266486 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6csmg\" (UniqueName: \"kubernetes.io/projected/0745be11-3cf3-45d7-94bb-8c55b91ffe15-kube-api-access-6csmg\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.266776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-utilities\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.266834 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-catalog-content\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.368630 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-utilities\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.368676 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-catalog-content\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.368742 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6csmg\" (UniqueName: \"kubernetes.io/projected/0745be11-3cf3-45d7-94bb-8c55b91ffe15-kube-api-access-6csmg\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.369290 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-utilities\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.369428 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-catalog-content\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.386441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6csmg\" (UniqueName: \"kubernetes.io/projected/0745be11-3cf3-45d7-94bb-8c55b91ffe15-kube-api-access-6csmg\") pod \"redhat-marketplace-bd8r2\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.459315 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:21 crc kubenswrapper[4812]: I1124 20:51:21.506442 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:22 crc kubenswrapper[4812]: W1124 20:51:22.046582 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0745be11_3cf3_45d7_94bb_8c55b91ffe15.slice/crio-91121c026be43f438c46aee1d69b76eadcca8fb1cc1988ed3c1f9b5aaeb82dbf WatchSource:0}: Error finding container 91121c026be43f438c46aee1d69b76eadcca8fb1cc1988ed3c1f9b5aaeb82dbf: Status 404 returned error can't find the container with id 91121c026be43f438c46aee1d69b76eadcca8fb1cc1988ed3c1f9b5aaeb82dbf Nov 24 20:51:22 crc kubenswrapper[4812]: I1124 20:51:22.049621 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8r2"] Nov 24 20:51:22 crc kubenswrapper[4812]: I1124 20:51:22.898262 4812 generic.go:334] "Generic (PLEG): container finished" podID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerID="818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9" exitCode=0 Nov 24 20:51:22 crc kubenswrapper[4812]: I1124 20:51:22.898356 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8r2" event={"ID":"0745be11-3cf3-45d7-94bb-8c55b91ffe15","Type":"ContainerDied","Data":"818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9"} Nov 24 20:51:22 crc kubenswrapper[4812]: I1124 20:51:22.898674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8r2" event={"ID":"0745be11-3cf3-45d7-94bb-8c55b91ffe15","Type":"ContainerStarted","Data":"91121c026be43f438c46aee1d69b76eadcca8fb1cc1988ed3c1f9b5aaeb82dbf"} Nov 24 20:51:22 crc kubenswrapper[4812]: I1124 20:51:22.898744 4812 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api-log" containerID="cri-o://b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa" gracePeriod=30 Nov 24 20:51:22 crc kubenswrapper[4812]: I1124 20:51:22.898799 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api" containerID="cri-o://0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84" gracePeriod=30 Nov 24 20:51:22 crc kubenswrapper[4812]: I1124 20:51:22.903461 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.523619 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.608753 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data\") pod \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.608852 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-combined-ca-bundle\") pod \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.608879 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data-custom\") pod \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.608905 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-logs\") pod \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.608996 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-etc-machine-id\") pod \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.609033 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frwqm\" (UniqueName: \"kubernetes.io/projected/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-kube-api-access-frwqm\") pod \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.609071 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-scripts\") pod \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\" (UID: \"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f\") " Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.609489 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" (UID: "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.610018 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-logs" (OuterVolumeSpecName: "logs") pod "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" (UID: "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.610378 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.610403 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.614568 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" (UID: "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.614563 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-scripts" (OuterVolumeSpecName: "scripts") pod "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" (UID: "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.615118 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-kube-api-access-frwqm" (OuterVolumeSpecName: "kube-api-access-frwqm") pod "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" (UID: "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f"). InnerVolumeSpecName "kube-api-access-frwqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.644305 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" (UID: "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.694115 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data" (OuterVolumeSpecName: "config-data") pod "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" (UID: "1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.712481 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.712830 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frwqm\" (UniqueName: \"kubernetes.io/projected/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-kube-api-access-frwqm\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.713002 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.713148 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.713303 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.918679 4812 generic.go:334] "Generic (PLEG): container finished" podID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerID="0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84" exitCode=0 Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.919032 4812 generic.go:334] "Generic (PLEG): container finished" podID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerID="b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa" exitCode=143 Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.918811 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.918746 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f","Type":"ContainerDied","Data":"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84"} Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.919551 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f","Type":"ContainerDied","Data":"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa"} Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.919585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f","Type":"ContainerDied","Data":"081fe56fe52617e1793f2a782015bd65f157ad66844849bd3647652a0308a048"} Nov 24 20:51:23 crc kubenswrapper[4812]: I1124 20:51:23.919656 4812 scope.go:117] "RemoveContainer" containerID="0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.012766 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.015088 4812 scope.go:117] "RemoveContainer" containerID="b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.026493 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.039553 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:24 crc kubenswrapper[4812]: E1124 20:51:24.040073 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api-log" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.040103 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api-log" Nov 24 20:51:24 crc kubenswrapper[4812]: E1124 20:51:24.040132 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.040140 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.040367 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.040407 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" containerName="cinder-api-log" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.041582 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.045024 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dtkcw" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.045141 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.045223 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.045381 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.046651 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.046844 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.048364 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.128720 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-scripts\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.128823 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.128851 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.128902 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/36f90f79-9656-4aee-8606-c4faab60548a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.128953 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data-custom\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.128983 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.129123 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f90f79-9656-4aee-8606-c4faab60548a-logs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.129301 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks6m2\" (UniqueName: \"kubernetes.io/projected/36f90f79-9656-4aee-8606-c4faab60548a-kube-api-access-ks6m2\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.129456 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.131489 4812 scope.go:117] "RemoveContainer" containerID="0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84" Nov 24 20:51:24 crc kubenswrapper[4812]: E1124 20:51:24.131994 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84\": container with ID starting with 0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84 not found: ID does not exist" containerID="0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.132031 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84"} err="failed to get container status \"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84\": rpc error: code = NotFound desc = could not find container \"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84\": container with ID starting with 0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84 not found: ID does not exist" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.132060 4812 scope.go:117] "RemoveContainer" containerID="b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa" Nov 24 20:51:24 crc kubenswrapper[4812]: E1124 20:51:24.132608 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa\": container with ID starting with b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa not found: ID does not exist" containerID="b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.132646 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa"} err="failed to get container status \"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa\": rpc error: code = NotFound desc = could not find container \"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa\": container with ID starting with b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa not found: ID does not exist" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 
20:51:24.132674 4812 scope.go:117] "RemoveContainer" containerID="0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.133098 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84"} err="failed to get container status \"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84\": rpc error: code = NotFound desc = could not find container \"0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84\": container with ID starting with 0c928f30bb5fcf2424eaf08b7eef580bb5fb2f4607b5caf058b945bfeaad3c84 not found: ID does not exist" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.133128 4812 scope.go:117] "RemoveContainer" containerID="b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.133461 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa"} err="failed to get container status \"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa\": rpc error: code = NotFound desc = could not find container \"b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa\": container with ID starting with b203e4e9c8a0c6dedba58525928e16bb47890f0b5de440ec2c1193edac53a0fa not found: ID does not exist" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.231650 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-scripts\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.231827 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.231887 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.231991 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/36f90f79-9656-4aee-8606-c4faab60548a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.232111 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data-custom\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.232184 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-internal-tls-certs\") pod 
\"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.232267 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f90f79-9656-4aee-8606-c4faab60548a-logs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.232377 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks6m2\" (UniqueName: \"kubernetes.io/projected/36f90f79-9656-4aee-8606-c4faab60548a-kube-api-access-ks6m2\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.232472 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.232882 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/36f90f79-9656-4aee-8606-c4faab60548a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.235735 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f90f79-9656-4aee-8606-c4faab60548a-logs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.237662 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.240251 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.240504 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data-custom\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.240577 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-scripts\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.241097 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " 
pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.249928 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.266223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks6m2\" (UniqueName: \"kubernetes.io/projected/36f90f79-9656-4aee-8606-c4faab60548a-kube-api-access-ks6m2\") pod \"cinder-api-0\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.391739 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.936963 4812 generic.go:334] "Generic (PLEG): container finished" podID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerID="7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da" exitCode=0 Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.937026 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8r2" event={"ID":"0745be11-3cf3-45d7-94bb-8c55b91ffe15","Type":"ContainerDied","Data":"7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da"} Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.950744 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:24 crc kubenswrapper[4812]: I1124 20:51:24.988677 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f" path="/var/lib/kubelet/pods/1a7a21e3-a0fe-4ac1-8ff8-f43df6c0c25f/volumes" Nov 24 20:51:25 crc kubenswrapper[4812]: I1124 20:51:25.975041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8r2" event={"ID":"0745be11-3cf3-45d7-94bb-8c55b91ffe15","Type":"ContainerStarted","Data":"4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd"} Nov 24 20:51:25 crc kubenswrapper[4812]: I1124 20:51:25.981592 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"36f90f79-9656-4aee-8606-c4faab60548a","Type":"ContainerStarted","Data":"49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b"} Nov 24 20:51:25 crc kubenswrapper[4812]: I1124 20:51:25.981672 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"36f90f79-9656-4aee-8606-c4faab60548a","Type":"ContainerStarted","Data":"d1e4da434f4a9d63dc85e51ca509cd9b0d261bd69d7196b047de65660411544f"} Nov 24 20:51:25 crc kubenswrapper[4812]: I1124 20:51:25.999841 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bd8r2" podStartSLOduration=2.212881281 podStartE2EDuration="4.999818665s" podCreationTimestamp="2025-11-24 20:51:21 +0000 UTC" firstStartedPulling="2025-11-24 20:51:22.903180952 +0000 UTC m=+5676.692133333" lastFinishedPulling="2025-11-24 20:51:25.690118326 +0000 UTC m=+5679.479070717" observedRunningTime="2025-11-24 20:51:25.995832071 +0000 UTC m=+5679.784784482" watchObservedRunningTime="2025-11-24 20:51:25.999818665 +0000 UTC m=+5679.788771046" Nov 24 20:51:26 crc kubenswrapper[4812]: I1124 20:51:26.994190 4812 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/cinder-api-0" event={"ID":"36f90f79-9656-4aee-8606-c4faab60548a","Type":"ContainerStarted","Data":"97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee"} Nov 24 20:51:27 crc kubenswrapper[4812]: I1124 20:51:27.020643 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.020620522 podStartE2EDuration="4.020620522s" podCreationTimestamp="2025-11-24 20:51:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:51:27.01455366 +0000 UTC m=+5680.803506081" watchObservedRunningTime="2025-11-24 20:51:27.020620522 +0000 UTC m=+5680.809572903" Nov 24 20:51:28 crc kubenswrapper[4812]: I1124 20:51:28.002375 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 20:51:28 crc kubenswrapper[4812]: I1124 20:51:28.491391 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:51:28 crc kubenswrapper[4812]: I1124 20:51:28.578561 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d7dfc545-4qj7l"] Nov 24 20:51:28 crc kubenswrapper[4812]: I1124 20:51:28.578887 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" podUID="e00419f9-6c65-4198-9faf-28a1ad214606" containerName="dnsmasq-dns" containerID="cri-o://02fd422028e424af4b52fcf25a4587b28c58df9c7d2fe5ab05d35b571b06b34c" gracePeriod=10 Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.011429 4812 generic.go:334] "Generic (PLEG): container finished" podID="e00419f9-6c65-4198-9faf-28a1ad214606" containerID="02fd422028e424af4b52fcf25a4587b28c58df9c7d2fe5ab05d35b571b06b34c" exitCode=0 Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.011689 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" event={"ID":"e00419f9-6c65-4198-9faf-28a1ad214606","Type":"ContainerDied","Data":"02fd422028e424af4b52fcf25a4587b28c58df9c7d2fe5ab05d35b571b06b34c"} Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.090392 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.232698 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-nb\") pod \"e00419f9-6c65-4198-9faf-28a1ad214606\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.232949 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-config\") pod \"e00419f9-6c65-4198-9faf-28a1ad214606\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.233092 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cqcr\" (UniqueName: \"kubernetes.io/projected/e00419f9-6c65-4198-9faf-28a1ad214606-kube-api-access-4cqcr\") pod \"e00419f9-6c65-4198-9faf-28a1ad214606\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.233279 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-sb\") pod \"e00419f9-6c65-4198-9faf-28a1ad214606\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.233401 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-dns-svc\") pod \"e00419f9-6c65-4198-9faf-28a1ad214606\" (UID: \"e00419f9-6c65-4198-9faf-28a1ad214606\") " Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.240290 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e00419f9-6c65-4198-9faf-28a1ad214606-kube-api-access-4cqcr" (OuterVolumeSpecName: "kube-api-access-4cqcr") pod "e00419f9-6c65-4198-9faf-28a1ad214606" (UID: "e00419f9-6c65-4198-9faf-28a1ad214606"). InnerVolumeSpecName "kube-api-access-4cqcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.278729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e00419f9-6c65-4198-9faf-28a1ad214606" (UID: "e00419f9-6c65-4198-9faf-28a1ad214606"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.300100 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e00419f9-6c65-4198-9faf-28a1ad214606" (UID: "e00419f9-6c65-4198-9faf-28a1ad214606"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.314749 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-config" (OuterVolumeSpecName: "config") pod "e00419f9-6c65-4198-9faf-28a1ad214606" (UID: "e00419f9-6c65-4198-9faf-28a1ad214606"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.317844 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e00419f9-6c65-4198-9faf-28a1ad214606" (UID: "e00419f9-6c65-4198-9faf-28a1ad214606"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.337195 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.337238 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.337250 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cqcr\" (UniqueName: \"kubernetes.io/projected/e00419f9-6c65-4198-9faf-28a1ad214606-kube-api-access-4cqcr\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.337264 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:29 crc kubenswrapper[4812]: I1124 20:51:29.337273 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e00419f9-6c65-4198-9faf-28a1ad214606-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:30 crc kubenswrapper[4812]: I1124 20:51:30.025650 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" event={"ID":"e00419f9-6c65-4198-9faf-28a1ad214606","Type":"ContainerDied","Data":"558c9723b634dfc604762abadf41fc9ea3673246dbda273b0484dcf9a94a29f7"} Nov 24 20:51:30 crc kubenswrapper[4812]: I1124 20:51:30.025723 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59d7dfc545-4qj7l" Nov 24 20:51:30 crc kubenswrapper[4812]: I1124 20:51:30.026517 4812 scope.go:117] "RemoveContainer" containerID="02fd422028e424af4b52fcf25a4587b28c58df9c7d2fe5ab05d35b571b06b34c" Nov 24 20:51:30 crc kubenswrapper[4812]: I1124 20:51:30.063787 4812 scope.go:117] "RemoveContainer" containerID="aae1660d2ab81c41f965fd203ac7fc7900a2828bfe6ee697f1a90389c47c90c2" Nov 24 20:51:30 crc kubenswrapper[4812]: I1124 20:51:30.085574 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d7dfc545-4qj7l"] Nov 24 20:51:30 crc kubenswrapper[4812]: I1124 20:51:30.101890 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d7dfc545-4qj7l"] Nov 24 20:51:30 crc kubenswrapper[4812]: I1124 20:51:30.989949 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e00419f9-6c65-4198-9faf-28a1ad214606" path="/var/lib/kubelet/pods/e00419f9-6c65-4198-9faf-28a1ad214606/volumes" Nov 24 20:51:31 crc kubenswrapper[4812]: I1124 20:51:31.506960 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:31 crc kubenswrapper[4812]: I1124 20:51:31.507020 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:31 crc kubenswrapper[4812]: I1124 20:51:31.590865 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:32 crc kubenswrapper[4812]: I1124 20:51:32.139914 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:32 crc kubenswrapper[4812]: I1124 20:51:32.208240 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8r2"] Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.074855 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bd8r2" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="registry-server" containerID="cri-o://4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd" gracePeriod=2 Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.623073 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.741970 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6csmg\" (UniqueName: \"kubernetes.io/projected/0745be11-3cf3-45d7-94bb-8c55b91ffe15-kube-api-access-6csmg\") pod \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.742052 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-catalog-content\") pod \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.742314 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-utilities\") pod \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\" (UID: \"0745be11-3cf3-45d7-94bb-8c55b91ffe15\") " Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.743657 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-utilities" (OuterVolumeSpecName: "utilities") pod "0745be11-3cf3-45d7-94bb-8c55b91ffe15" (UID: "0745be11-3cf3-45d7-94bb-8c55b91ffe15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.754200 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0745be11-3cf3-45d7-94bb-8c55b91ffe15-kube-api-access-6csmg" (OuterVolumeSpecName: "kube-api-access-6csmg") pod "0745be11-3cf3-45d7-94bb-8c55b91ffe15" (UID: "0745be11-3cf3-45d7-94bb-8c55b91ffe15"). InnerVolumeSpecName "kube-api-access-6csmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.758128 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0745be11-3cf3-45d7-94bb-8c55b91ffe15" (UID: "0745be11-3cf3-45d7-94bb-8c55b91ffe15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.844197 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.844227 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6csmg\" (UniqueName: \"kubernetes.io/projected/0745be11-3cf3-45d7-94bb-8c55b91ffe15-kube-api-access-6csmg\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:34 crc kubenswrapper[4812]: I1124 20:51:34.844237 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0745be11-3cf3-45d7-94bb-8c55b91ffe15-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.089961 4812 generic.go:334] "Generic (PLEG): container finished" podID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerID="4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd" exitCode=0 Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.090005 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8r2" event={"ID":"0745be11-3cf3-45d7-94bb-8c55b91ffe15","Type":"ContainerDied","Data":"4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd"} Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.090038 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bd8r2" event={"ID":"0745be11-3cf3-45d7-94bb-8c55b91ffe15","Type":"ContainerDied","Data":"91121c026be43f438c46aee1d69b76eadcca8fb1cc1988ed3c1f9b5aaeb82dbf"} Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.090055 4812 scope.go:117] "RemoveContainer" containerID="4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.090050 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bd8r2" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.134509 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8r2"] Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.148456 4812 scope.go:117] "RemoveContainer" containerID="7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.149294 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bd8r2"] Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.175850 4812 scope.go:117] "RemoveContainer" containerID="818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.239367 4812 scope.go:117] "RemoveContainer" containerID="4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd" Nov 24 20:51:35 crc kubenswrapper[4812]: E1124 20:51:35.239936 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd\": container with ID starting with 4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd not found: ID does not exist" containerID="4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.239972 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd"} err="failed to get container status \"4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd\": rpc error: code = NotFound desc = could not find container \"4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd\": container with ID starting with 4e3e2bfe93fbce8aab36a7f6e28bdc1d7883a6d47aba828e1fe13fffb97b01fd not found: ID does not exist" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.239997 4812 scope.go:117] "RemoveContainer" containerID="7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da" Nov 24 20:51:35 crc kubenswrapper[4812]: E1124 20:51:35.240450 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da\": container with ID starting with 7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da not found: ID does not exist" containerID="7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.240476 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da"} err="failed to get container status \"7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da\": rpc error: code = NotFound desc = could not find container \"7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da\": container with ID starting with 7a78de3607fdc51a03cfc79dbc00846fd6f9ec99888a3ff0c9be189fde3ac6da not found: ID does not exist" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.240505 4812 scope.go:117] "RemoveContainer" containerID="818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9" Nov 24 20:51:35 crc kubenswrapper[4812]: E1124 20:51:35.240866 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9\": container with ID starting with 818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9 not found: ID does not exist" containerID="818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9" Nov 24 20:51:35 crc kubenswrapper[4812]: I1124 20:51:35.240890 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9"} err="failed to get container status \"818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9\": rpc error: code = NotFound desc = could not find container \"818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9\": container with ID starting with 818e60fa5b8774303d0ca560548b829137b822292316c4b68f17e16bb54a4ff9 not found: ID does not exist" Nov 24 20:51:36 crc kubenswrapper[4812]: I1124 20:51:36.145242 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 20:51:36 crc kubenswrapper[4812]: I1124 20:51:36.974813 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" path="/var/lib/kubelet/pods/0745be11-3cf3-45d7-94bb-8c55b91ffe15/volumes" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.729450 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:51:53 crc kubenswrapper[4812]: E1124 20:51:53.730455 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00419f9-6c65-4198-9faf-28a1ad214606" containerName="init" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.730473 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00419f9-6c65-4198-9faf-28a1ad214606" containerName="init" Nov 24 20:51:53 crc kubenswrapper[4812]: E1124 20:51:53.730491 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e00419f9-6c65-4198-9faf-28a1ad214606" containerName="dnsmasq-dns" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.730501 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e00419f9-6c65-4198-9faf-28a1ad214606" containerName="dnsmasq-dns" Nov 24 20:51:53 crc kubenswrapper[4812]: E1124 20:51:53.730517 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="registry-server" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.730526 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="registry-server" Nov 24 20:51:53 crc kubenswrapper[4812]: E1124 20:51:53.730555 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="extract-content" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.730562 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="extract-content" Nov 24 20:51:53 crc kubenswrapper[4812]: E1124 20:51:53.730579 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="extract-utilities" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.730587 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="extract-utilities" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.730830 4812 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e00419f9-6c65-4198-9faf-28a1ad214606" containerName="dnsmasq-dns" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.730870 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0745be11-3cf3-45d7-94bb-8c55b91ffe15" containerName="registry-server" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.732147 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.741808 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.742443 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.832444 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-scripts\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.832502 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dbf6e06-273f-4234-b658-130816d97f84-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.832619 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.832663 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.832745 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.832940 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsnxz\" (UniqueName: \"kubernetes.io/projected/1dbf6e06-273f-4234-b658-130816d97f84-kube-api-access-qsnxz\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.873004 4812 scope.go:117] "RemoveContainer" containerID="921c1f7cc32b005d7a429c998937e7d805af23852fb987f712c062dcd679d94b" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.934775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.934905 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.935025 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsnxz\" (UniqueName: \"kubernetes.io/projected/1dbf6e06-273f-4234-b658-130816d97f84-kube-api-access-qsnxz\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.935110 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-scripts\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.935154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dbf6e06-273f-4234-b658-130816d97f84-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.935232 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.935667 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dbf6e06-273f-4234-b658-130816d97f84-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.941062 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-scripts\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.945021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.945808 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.953223 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:53 crc kubenswrapper[4812]: I1124 20:51:53.968280 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsnxz\" (UniqueName: \"kubernetes.io/projected/1dbf6e06-273f-4234-b658-130816d97f84-kube-api-access-qsnxz\") pod \"cinder-scheduler-0\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " pod="openstack/cinder-scheduler-0" Nov 24 20:51:54 crc kubenswrapper[4812]: I1124 20:51:54.057802 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 20:51:54 crc kubenswrapper[4812]: I1124 20:51:54.567480 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:51:54 crc kubenswrapper[4812]: I1124 20:51:54.866318 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:54 crc kubenswrapper[4812]: I1124 20:51:54.868786 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api-log" containerID="cri-o://49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b" gracePeriod=30 Nov 24 20:51:54 crc kubenswrapper[4812]: I1124 20:51:54.868873 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api" containerID="cri-o://97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee" gracePeriod=30 Nov 24 20:51:55 crc kubenswrapper[4812]: I1124 20:51:55.304481 4812 generic.go:334] "Generic (PLEG): container finished" podID="36f90f79-9656-4aee-8606-c4faab60548a" containerID="49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b" exitCode=143 Nov 24 20:51:55 crc kubenswrapper[4812]: I1124 20:51:55.304563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"36f90f79-9656-4aee-8606-c4faab60548a","Type":"ContainerDied","Data":"49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b"} Nov 24 20:51:55 crc kubenswrapper[4812]: I1124 20:51:55.310229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1dbf6e06-273f-4234-b658-130816d97f84","Type":"ContainerStarted","Data":"f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764"} Nov 24 20:51:55 crc kubenswrapper[4812]: I1124 20:51:55.310494 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1dbf6e06-273f-4234-b658-130816d97f84","Type":"ContainerStarted","Data":"f750c66d59e413159d8c8d199ffea8ee70bc7cc0a4030d67f1dcb439f4d26eed"} Nov 24 20:51:56 crc kubenswrapper[4812]: I1124 20:51:56.323499 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1dbf6e06-273f-4234-b658-130816d97f84","Type":"ContainerStarted","Data":"721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00"} Nov 24 20:51:56 crc kubenswrapper[4812]: I1124 20:51:56.356957 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.356934845 podStartE2EDuration="3.356934845s" 
podCreationTimestamp="2025-11-24 20:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:51:56.349393611 +0000 UTC m=+5710.138346022" watchObservedRunningTime="2025-11-24 20:51:56.356934845 +0000 UTC m=+5710.145887226" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.325708 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.348737 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.348978 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"36f90f79-9656-4aee-8606-c4faab60548a","Type":"ContainerDied","Data":"97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee"} Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.349464 4812 scope.go:117] "RemoveContainer" containerID="97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.348664 4812 generic.go:334] "Generic (PLEG): container finished" podID="36f90f79-9656-4aee-8606-c4faab60548a" containerID="97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee" exitCode=0 Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.349893 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"36f90f79-9656-4aee-8606-c4faab60548a","Type":"ContainerDied","Data":"d1e4da434f4a9d63dc85e51ca509cd9b0d261bd69d7196b047de65660411544f"} Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.393592 4812 scope.go:117] "RemoveContainer" containerID="49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.420757 4812 scope.go:117] "RemoveContainer" containerID="97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee" Nov 24 20:51:58 crc kubenswrapper[4812]: E1124 20:51:58.421361 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee\": container with ID starting with 97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee not found: ID does not exist" containerID="97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.421454 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee"} err="failed to get container status \"97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee\": rpc error: code = NotFound desc = could not find container \"97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee\": container with ID starting with 97e378f6b10fd1526fd1f0a3bba9b7dcaa0a54d5547311e753a6c742f7bc05ee not found: ID does not exist" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.421558 4812 scope.go:117] "RemoveContainer" containerID="49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b" Nov 24 20:51:58 crc kubenswrapper[4812]: E1124 20:51:58.422277 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b\": 
container with ID starting with 49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b not found: ID does not exist" containerID="49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.422818 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b"} err="failed to get container status \"49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b\": rpc error: code = NotFound desc = could not find container \"49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b\": container with ID starting with 49034c9f53f2af3629330144340573846b8f7eb731bf5153f398533fdf41583b not found: ID does not exist" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.430362 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-scripts\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.430595 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.430688 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data-custom\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.430759 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6m2\" (UniqueName: \"kubernetes.io/projected/36f90f79-9656-4aee-8606-c4faab60548a-kube-api-access-ks6m2\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.430879 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f90f79-9656-4aee-8606-c4faab60548a-logs\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.431095 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-internal-tls-certs\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.431241 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-combined-ca-bundle\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.431319 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-public-tls-certs\") pod 
\"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.431456 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/36f90f79-9656-4aee-8606-c4faab60548a-etc-machine-id\") pod \"36f90f79-9656-4aee-8606-c4faab60548a\" (UID: \"36f90f79-9656-4aee-8606-c4faab60548a\") " Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.431877 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36f90f79-9656-4aee-8606-c4faab60548a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.432478 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36f90f79-9656-4aee-8606-c4faab60548a-logs" (OuterVolumeSpecName: "logs") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.437588 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.437824 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36f90f79-9656-4aee-8606-c4faab60548a-kube-api-access-ks6m2" (OuterVolumeSpecName: "kube-api-access-ks6m2") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "kube-api-access-ks6m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.453488 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-scripts" (OuterVolumeSpecName: "scripts") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.484146 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.497309 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data" (OuterVolumeSpecName: "config-data") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.504528 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.507988 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "36f90f79-9656-4aee-8606-c4faab60548a" (UID: "36f90f79-9656-4aee-8606-c4faab60548a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537294 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537329 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/36f90f79-9656-4aee-8606-c4faab60548a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537349 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537357 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537366 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537374 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks6m2\" (UniqueName: \"kubernetes.io/projected/36f90f79-9656-4aee-8606-c4faab60548a-kube-api-access-ks6m2\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537384 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f90f79-9656-4aee-8606-c4faab60548a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537391 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.537399 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f90f79-9656-4aee-8606-c4faab60548a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.689344 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.702523 4812 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.722365 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:58 crc kubenswrapper[4812]: E1124 20:51:58.723180 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api-log" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.723323 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api-log" Nov 24 20:51:58 crc kubenswrapper[4812]: E1124 20:51:58.723476 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.723573 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.723981 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api-log" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.724131 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f90f79-9656-4aee-8606-c4faab60548a" containerName="cinder-api" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.726491 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.728888 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.731029 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.731498 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.749030 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.843452 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.843548 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2b8x\" (UniqueName: \"kubernetes.io/projected/5754dce5-a565-4102-9655-a8c736c9aa70-kube-api-access-x2b8x\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.843866 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.843943 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5754dce5-a565-4102-9655-a8c736c9aa70-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.844057 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-scripts\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.844219 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-config-data\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.844283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5754dce5-a565-4102-9655-a8c736c9aa70-logs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.844375 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.844454 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-config-data-custom\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.946520 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-config-data\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.946604 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5754dce5-a565-4102-9655-a8c736c9aa70-logs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.946667 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.946725 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-config-data-custom\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.946762 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.946854 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2b8x\" (UniqueName: \"kubernetes.io/projected/5754dce5-a565-4102-9655-a8c736c9aa70-kube-api-access-x2b8x\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.946979 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.947015 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5754dce5-a565-4102-9655-a8c736c9aa70-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.947070 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-scripts\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.948001 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5754dce5-a565-4102-9655-a8c736c9aa70-logs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.948639 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5754dce5-a565-4102-9655-a8c736c9aa70-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.951991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-scripts\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.953566 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.954259 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-config-data-custom\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.954408 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.954958 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-config-data\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.956462 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5754dce5-a565-4102-9655-a8c736c9aa70-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.970216 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2b8x\" (UniqueName: \"kubernetes.io/projected/5754dce5-a565-4102-9655-a8c736c9aa70-kube-api-access-x2b8x\") pod \"cinder-api-0\" (UID: \"5754dce5-a565-4102-9655-a8c736c9aa70\") " pod="openstack/cinder-api-0" Nov 24 20:51:58 crc kubenswrapper[4812]: I1124 20:51:58.985086 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f90f79-9656-4aee-8606-c4faab60548a" path="/var/lib/kubelet/pods/36f90f79-9656-4aee-8606-c4faab60548a/volumes" Nov 24 20:51:59 crc kubenswrapper[4812]: I1124 20:51:59.044294 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 20:51:59 crc kubenswrapper[4812]: I1124 20:51:59.061899 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 20:51:59 crc kubenswrapper[4812]: I1124 20:51:59.571444 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 20:51:59 crc kubenswrapper[4812]: W1124 20:51:59.578068 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5754dce5_a565_4102_9655_a8c736c9aa70.slice/crio-9c18c6f04a3277f83a4c71c9373f7ae76a673af9952ab161a67d92d4afdec61b WatchSource:0}: Error finding container 9c18c6f04a3277f83a4c71c9373f7ae76a673af9952ab161a67d92d4afdec61b: Status 404 returned error can't find the container with id 9c18c6f04a3277f83a4c71c9373f7ae76a673af9952ab161a67d92d4afdec61b Nov 24 20:52:00 crc kubenswrapper[4812]: I1124 20:52:00.383133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5754dce5-a565-4102-9655-a8c736c9aa70","Type":"ContainerStarted","Data":"5d6424c0b8a5eb203df76705a19115d9de467509a7cf7f8e2256764fe29e4aa3"} Nov 24 20:52:00 crc kubenswrapper[4812]: I1124 20:52:00.383526 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5754dce5-a565-4102-9655-a8c736c9aa70","Type":"ContainerStarted","Data":"9c18c6f04a3277f83a4c71c9373f7ae76a673af9952ab161a67d92d4afdec61b"} Nov 24 20:52:01 crc kubenswrapper[4812]: I1124 20:52:01.394176 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5754dce5-a565-4102-9655-a8c736c9aa70","Type":"ContainerStarted","Data":"9a076ab1b40e7160a5b5dfd0a1ac03aa1fb96613722a22ae4bbe666ce5bca7cd"} Nov 24 20:52:01 crc kubenswrapper[4812]: I1124 20:52:01.394933 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 20:52:01 crc kubenswrapper[4812]: I1124 20:52:01.421726 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.421701998 podStartE2EDuration="3.421701998s" podCreationTimestamp="2025-11-24 20:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:01.41153812 +0000 UTC m=+5715.200490501" watchObservedRunningTime="2025-11-24 20:52:01.421701998 +0000 UTC m=+5715.210654369" Nov 24 20:52:04 crc kubenswrapper[4812]: I1124 20:52:04.283744 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 20:52:04 crc kubenswrapper[4812]: I1124 20:52:04.369848 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:52:04 crc kubenswrapper[4812]: I1124 20:52:04.426795 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="cinder-scheduler" containerID="cri-o://f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764" gracePeriod=30 Nov 24 20:52:04 crc kubenswrapper[4812]: I1124 20:52:04.426937 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="probe" containerID="cri-o://721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00" gracePeriod=30 Nov 24 20:52:05 crc kubenswrapper[4812]: I1124 20:52:05.459066 4812 generic.go:334] "Generic (PLEG): container finished" podID="1dbf6e06-273f-4234-b658-130816d97f84" containerID="721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00" exitCode=0 Nov 24 20:52:05 crc kubenswrapper[4812]: I1124 20:52:05.459173 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1dbf6e06-273f-4234-b658-130816d97f84","Type":"ContainerDied","Data":"721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00"} Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.349185 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.408173 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-combined-ca-bundle\") pod \"1dbf6e06-273f-4234-b658-130816d97f84\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.408255 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data\") pod \"1dbf6e06-273f-4234-b658-130816d97f84\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.408275 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dbf6e06-273f-4234-b658-130816d97f84-etc-machine-id\") pod \"1dbf6e06-273f-4234-b658-130816d97f84\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.408326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data-custom\") pod \"1dbf6e06-273f-4234-b658-130816d97f84\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.408385 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-scripts\") pod \"1dbf6e06-273f-4234-b658-130816d97f84\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.408450 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsnxz\" (UniqueName: \"kubernetes.io/projected/1dbf6e06-273f-4234-b658-130816d97f84-kube-api-access-qsnxz\") pod \"1dbf6e06-273f-4234-b658-130816d97f84\" (UID: \"1dbf6e06-273f-4234-b658-130816d97f84\") " Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.410229 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1dbf6e06-273f-4234-b658-130816d97f84-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1dbf6e06-273f-4234-b658-130816d97f84" (UID: "1dbf6e06-273f-4234-b658-130816d97f84"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.416498 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1dbf6e06-273f-4234-b658-130816d97f84" (UID: "1dbf6e06-273f-4234-b658-130816d97f84"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.422518 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-scripts" (OuterVolumeSpecName: "scripts") pod "1dbf6e06-273f-4234-b658-130816d97f84" (UID: "1dbf6e06-273f-4234-b658-130816d97f84"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.429919 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dbf6e06-273f-4234-b658-130816d97f84-kube-api-access-qsnxz" (OuterVolumeSpecName: "kube-api-access-qsnxz") pod "1dbf6e06-273f-4234-b658-130816d97f84" (UID: "1dbf6e06-273f-4234-b658-130816d97f84"). InnerVolumeSpecName "kube-api-access-qsnxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.480560 4812 generic.go:334] "Generic (PLEG): container finished" podID="1dbf6e06-273f-4234-b658-130816d97f84" containerID="f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764" exitCode=0 Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.480606 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1dbf6e06-273f-4234-b658-130816d97f84","Type":"ContainerDied","Data":"f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764"} Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.480613 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.480636 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"1dbf6e06-273f-4234-b658-130816d97f84","Type":"ContainerDied","Data":"f750c66d59e413159d8c8d199ffea8ee70bc7cc0a4030d67f1dcb439f4d26eed"} Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.480657 4812 scope.go:117] "RemoveContainer" containerID="721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.486466 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1dbf6e06-273f-4234-b658-130816d97f84" (UID: "1dbf6e06-273f-4234-b658-130816d97f84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.511610 4812 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1dbf6e06-273f-4234-b658-130816d97f84-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.511886 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.512015 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.512135 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsnxz\" (UniqueName: \"kubernetes.io/projected/1dbf6e06-273f-4234-b658-130816d97f84-kube-api-access-qsnxz\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.512277 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.545584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data" (OuterVolumeSpecName: "config-data") pod "1dbf6e06-273f-4234-b658-130816d97f84" (UID: "1dbf6e06-273f-4234-b658-130816d97f84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.594597 4812 scope.go:117] "RemoveContainer" containerID="f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.614493 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1dbf6e06-273f-4234-b658-130816d97f84-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.614621 4812 scope.go:117] "RemoveContainer" containerID="721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00" Nov 24 20:52:06 crc kubenswrapper[4812]: E1124 20:52:06.615225 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00\": container with ID starting with 721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00 not found: ID does not exist" containerID="721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.615266 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00"} err="failed to get container status \"721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00\": rpc error: code = NotFound desc = could not find container \"721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00\": container with ID starting with 721233e041fa49723985f53cdae3e50453a47658b5acc0968f65677061d94e00 not found: ID does not exist" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 
20:52:06.615307 4812 scope.go:117] "RemoveContainer" containerID="f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764" Nov 24 20:52:06 crc kubenswrapper[4812]: E1124 20:52:06.615655 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764\": container with ID starting with f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764 not found: ID does not exist" containerID="f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.615675 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764"} err="failed to get container status \"f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764\": rpc error: code = NotFound desc = could not find container \"f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764\": container with ID starting with f23c5a6412c883037ae37cbf4c288217f16ef5a059ef0b95b648086df08e6764 not found: ID does not exist" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.825246 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.842352 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.853641 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:52:06 crc kubenswrapper[4812]: E1124 20:52:06.854134 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="cinder-scheduler" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.854163 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="cinder-scheduler" Nov 24 20:52:06 crc kubenswrapper[4812]: E1124 20:52:06.854203 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="probe" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.854213 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="probe" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.854473 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="probe" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.854526 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dbf6e06-273f-4234-b658-130816d97f84" containerName="cinder-scheduler" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.855945 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.857949 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.864891 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.919662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.919734 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-scripts\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.919767 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.919816 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-config-data\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.919840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scpdk\" (UniqueName: \"kubernetes.io/projected/115cede8-0b10-43ab-bf8b-b7ce50ad787e-kube-api-access-scpdk\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.919891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115cede8-0b10-43ab-bf8b-b7ce50ad787e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:06 crc kubenswrapper[4812]: I1124 20:52:06.979666 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dbf6e06-273f-4234-b658-130816d97f84" path="/var/lib/kubelet/pods/1dbf6e06-273f-4234-b658-130816d97f84/volumes" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.021932 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scpdk\" (UniqueName: \"kubernetes.io/projected/115cede8-0b10-43ab-bf8b-b7ce50ad787e-kube-api-access-scpdk\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.022251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/115cede8-0b10-43ab-bf8b-b7ce50ad787e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.022565 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.022660 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-scripts\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.022732 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.022814 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-config-data\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.023728 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/115cede8-0b10-43ab-bf8b-b7ce50ad787e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.042182 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.042254 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.042590 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-config-data\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.042679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115cede8-0b10-43ab-bf8b-b7ce50ad787e-scripts\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.045031 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-scpdk\" (UniqueName: \"kubernetes.io/projected/115cede8-0b10-43ab-bf8b-b7ce50ad787e-kube-api-access-scpdk\") pod \"cinder-scheduler-0\" (UID: \"115cede8-0b10-43ab-bf8b-b7ce50ad787e\") " pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.188296 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 20:52:07 crc kubenswrapper[4812]: I1124 20:52:07.658444 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 20:52:08 crc kubenswrapper[4812]: I1124 20:52:08.500205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115cede8-0b10-43ab-bf8b-b7ce50ad787e","Type":"ContainerStarted","Data":"96c89b19d4796090f250ccccca1b7d3a2a2f8c9a570f07ec330521f045f0053d"} Nov 24 20:52:08 crc kubenswrapper[4812]: I1124 20:52:08.500811 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115cede8-0b10-43ab-bf8b-b7ce50ad787e","Type":"ContainerStarted","Data":"5d4f90018a3268332386747ef76d85a7e0657ea6dee90fa51a3e76eba888d3f7"} Nov 24 20:52:09 crc kubenswrapper[4812]: I1124 20:52:09.513399 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"115cede8-0b10-43ab-bf8b-b7ce50ad787e","Type":"ContainerStarted","Data":"9a7b0824253f7c3231b110af150f427836a3d77cb7ab9f2c85a62cb2dbe57252"} Nov 24 20:52:09 crc kubenswrapper[4812]: I1124 20:52:09.543973 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.543944223 podStartE2EDuration="3.543944223s" podCreationTimestamp="2025-11-24 20:52:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:09.532306912 +0000 UTC m=+5723.321259343" watchObservedRunningTime="2025-11-24 20:52:09.543944223 +0000 UTC m=+5723.332896614" Nov 24 20:52:10 crc kubenswrapper[4812]: I1124 20:52:10.932187 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 20:52:12 crc kubenswrapper[4812]: I1124 20:52:12.188782 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 20:52:17 crc kubenswrapper[4812]: I1124 20:52:17.443645 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.569240 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-r76nd"] Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.571357 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.588329 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r76nd"] Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.659164 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qskhz\" (UniqueName: \"kubernetes.io/projected/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-kube-api-access-qskhz\") pod \"glance-db-create-r76nd\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.659278 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-operator-scripts\") pod \"glance-db-create-r76nd\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.674556 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1c3c-account-create-vf6wg"] Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.675928 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.677819 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.684285 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c3c-account-create-vf6wg"] Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.760701 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-operator-scripts\") pod \"glance-1c3c-account-create-vf6wg\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.760752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qskhz\" (UniqueName: \"kubernetes.io/projected/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-kube-api-access-qskhz\") pod \"glance-db-create-r76nd\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.760785 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzvzk\" (UniqueName: \"kubernetes.io/projected/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-kube-api-access-nzvzk\") pod \"glance-1c3c-account-create-vf6wg\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.760856 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-operator-scripts\") pod \"glance-db-create-r76nd\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.761574 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-operator-scripts\") pod \"glance-db-create-r76nd\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.795109 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qskhz\" (UniqueName: \"kubernetes.io/projected/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-kube-api-access-qskhz\") pod \"glance-db-create-r76nd\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.862118 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-operator-scripts\") pod \"glance-1c3c-account-create-vf6wg\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.862186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzvzk\" (UniqueName: \"kubernetes.io/projected/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-kube-api-access-nzvzk\") pod \"glance-1c3c-account-create-vf6wg\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.863226 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-operator-scripts\") pod \"glance-1c3c-account-create-vf6wg\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.883000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzvzk\" (UniqueName: \"kubernetes.io/projected/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-kube-api-access-nzvzk\") pod \"glance-1c3c-account-create-vf6wg\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.891039 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r76nd" Nov 24 20:52:18 crc kubenswrapper[4812]: I1124 20:52:18.995380 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:19 crc kubenswrapper[4812]: I1124 20:52:19.426600 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-r76nd"] Nov 24 20:52:19 crc kubenswrapper[4812]: W1124 20:52:19.430682 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4e56a8_83ee_4dc6_93d9_5a02a1175e90.slice/crio-235a431e833d5622978fe4f701915d7cc3d04064278fa862d84ac5776e7157fc WatchSource:0}: Error finding container 235a431e833d5622978fe4f701915d7cc3d04064278fa862d84ac5776e7157fc: Status 404 returned error can't find the container with id 235a431e833d5622978fe4f701915d7cc3d04064278fa862d84ac5776e7157fc Nov 24 20:52:19 crc kubenswrapper[4812]: I1124 20:52:19.567324 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1c3c-account-create-vf6wg"] Nov 24 20:52:19 crc kubenswrapper[4812]: W1124 20:52:19.580693 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67a7a55d_20ce_43b0_a4b7_4e26064e31d1.slice/crio-01d5570af050b94f8ecad2d9b0c653c95d6d213b584c40ee082c245a792a976a WatchSource:0}: Error finding container 01d5570af050b94f8ecad2d9b0c653c95d6d213b584c40ee082c245a792a976a: Status 404 returned error can't find the container with id 01d5570af050b94f8ecad2d9b0c653c95d6d213b584c40ee082c245a792a976a Nov 24 20:52:19 crc kubenswrapper[4812]: I1124 20:52:19.623542 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c3c-account-create-vf6wg" event={"ID":"67a7a55d-20ce-43b0-a4b7-4e26064e31d1","Type":"ContainerStarted","Data":"01d5570af050b94f8ecad2d9b0c653c95d6d213b584c40ee082c245a792a976a"} Nov 24 20:52:19 crc kubenswrapper[4812]: I1124 20:52:19.624679 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r76nd" event={"ID":"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90","Type":"ContainerStarted","Data":"235a431e833d5622978fe4f701915d7cc3d04064278fa862d84ac5776e7157fc"} Nov 24 20:52:20 crc kubenswrapper[4812]: I1124 20:52:20.641454 4812 generic.go:334] "Generic (PLEG): container finished" podID="67a7a55d-20ce-43b0-a4b7-4e26064e31d1" containerID="c723f575776c611264ba81577f9f13a7fd68241671eb51a5d5cea5340c5950ff" exitCode=0 Nov 24 20:52:20 crc kubenswrapper[4812]: I1124 20:52:20.641538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c3c-account-create-vf6wg" event={"ID":"67a7a55d-20ce-43b0-a4b7-4e26064e31d1","Type":"ContainerDied","Data":"c723f575776c611264ba81577f9f13a7fd68241671eb51a5d5cea5340c5950ff"} Nov 24 20:52:20 crc kubenswrapper[4812]: I1124 20:52:20.647918 4812 generic.go:334] "Generic (PLEG): container finished" podID="8f4e56a8-83ee-4dc6-93d9-5a02a1175e90" containerID="bc340bc102ec7e1253bb436bbd557051fad2569885134985579154a38b405cba" exitCode=0 Nov 24 20:52:20 crc kubenswrapper[4812]: I1124 20:52:20.647980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r76nd" event={"ID":"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90","Type":"ContainerDied","Data":"bc340bc102ec7e1253bb436bbd557051fad2569885134985579154a38b405cba"} Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.128647 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-r76nd" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.135087 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.255106 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qskhz\" (UniqueName: \"kubernetes.io/projected/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-kube-api-access-qskhz\") pod \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.255264 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-operator-scripts\") pod \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.255350 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzvzk\" (UniqueName: \"kubernetes.io/projected/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-kube-api-access-nzvzk\") pod \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\" (UID: \"67a7a55d-20ce-43b0-a4b7-4e26064e31d1\") " Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.255382 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-operator-scripts\") pod \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\" (UID: \"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90\") " Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.256287 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f4e56a8-83ee-4dc6-93d9-5a02a1175e90" (UID: "8f4e56a8-83ee-4dc6-93d9-5a02a1175e90"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.256374 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67a7a55d-20ce-43b0-a4b7-4e26064e31d1" (UID: "67a7a55d-20ce-43b0-a4b7-4e26064e31d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.263846 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-kube-api-access-nzvzk" (OuterVolumeSpecName: "kube-api-access-nzvzk") pod "67a7a55d-20ce-43b0-a4b7-4e26064e31d1" (UID: "67a7a55d-20ce-43b0-a4b7-4e26064e31d1"). InnerVolumeSpecName "kube-api-access-nzvzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.265778 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-kube-api-access-qskhz" (OuterVolumeSpecName: "kube-api-access-qskhz") pod "8f4e56a8-83ee-4dc6-93d9-5a02a1175e90" (UID: "8f4e56a8-83ee-4dc6-93d9-5a02a1175e90"). InnerVolumeSpecName "kube-api-access-qskhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.357222 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.357257 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzvzk\" (UniqueName: \"kubernetes.io/projected/67a7a55d-20ce-43b0-a4b7-4e26064e31d1-kube-api-access-nzvzk\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.357270 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.357279 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qskhz\" (UniqueName: \"kubernetes.io/projected/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90-kube-api-access-qskhz\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.677009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1c3c-account-create-vf6wg" event={"ID":"67a7a55d-20ce-43b0-a4b7-4e26064e31d1","Type":"ContainerDied","Data":"01d5570af050b94f8ecad2d9b0c653c95d6d213b584c40ee082c245a792a976a"} Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.677068 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d5570af050b94f8ecad2d9b0c653c95d6d213b584c40ee082c245a792a976a" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.677152 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1c3c-account-create-vf6wg" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.685492 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-r76nd" event={"ID":"8f4e56a8-83ee-4dc6-93d9-5a02a1175e90","Type":"ContainerDied","Data":"235a431e833d5622978fe4f701915d7cc3d04064278fa862d84ac5776e7157fc"} Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.685565 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235a431e833d5622978fe4f701915d7cc3d04064278fa862d84ac5776e7157fc" Nov 24 20:52:22 crc kubenswrapper[4812]: I1124 20:52:22.685575 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-r76nd" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.961220 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-sstpq"] Nov 24 20:52:23 crc kubenswrapper[4812]: E1124 20:52:23.962013 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f4e56a8-83ee-4dc6-93d9-5a02a1175e90" containerName="mariadb-database-create" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.962029 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f4e56a8-83ee-4dc6-93d9-5a02a1175e90" containerName="mariadb-database-create" Nov 24 20:52:23 crc kubenswrapper[4812]: E1124 20:52:23.962048 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a7a55d-20ce-43b0-a4b7-4e26064e31d1" containerName="mariadb-account-create" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.962054 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a7a55d-20ce-43b0-a4b7-4e26064e31d1" containerName="mariadb-account-create" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.962229 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f4e56a8-83ee-4dc6-93d9-5a02a1175e90" containerName="mariadb-database-create" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.962251 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a7a55d-20ce-43b0-a4b7-4e26064e31d1" containerName="mariadb-account-create" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.968055 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.970549 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6xxb" Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.971276 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sstpq"] Nov 24 20:52:23 crc kubenswrapper[4812]: I1124 20:52:23.971585 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.083880 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28hmj\" (UniqueName: \"kubernetes.io/projected/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-kube-api-access-28hmj\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.083929 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-combined-ca-bundle\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.084089 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-config-data\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.084500 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-db-sync-config-data\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.186535 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-db-sync-config-data\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.186592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28hmj\" (UniqueName: \"kubernetes.io/projected/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-kube-api-access-28hmj\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.186626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-combined-ca-bundle\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.186690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-config-data\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.193295 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-config-data\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.193707 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-combined-ca-bundle\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.194402 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-db-sync-config-data\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.220778 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28hmj\" (UniqueName: \"kubernetes.io/projected/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-kube-api-access-28hmj\") pod \"glance-db-sync-sstpq\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.291051 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.594247 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-sstpq"] Nov 24 20:52:24 crc kubenswrapper[4812]: W1124 20:52:24.600019 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod180b0dab_3a40_4d9c_aa4a_6bc8ec20f82e.slice/crio-814acc45c4bc213b062551eeef5aed89f8210a1e8f0ab49045bfbe6286d74283 WatchSource:0}: Error finding container 814acc45c4bc213b062551eeef5aed89f8210a1e8f0ab49045bfbe6286d74283: Status 404 returned error can't find the container with id 814acc45c4bc213b062551eeef5aed89f8210a1e8f0ab49045bfbe6286d74283 Nov 24 20:52:24 crc kubenswrapper[4812]: I1124 20:52:24.705383 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sstpq" event={"ID":"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e","Type":"ContainerStarted","Data":"814acc45c4bc213b062551eeef5aed89f8210a1e8f0ab49045bfbe6286d74283"} Nov 24 20:52:25 crc kubenswrapper[4812]: I1124 20:52:25.718318 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sstpq" event={"ID":"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e","Type":"ContainerStarted","Data":"a2f676d6b8ae5c685f768d0507d185c509f0d56f5792833b7ad9a0a681f109cb"} Nov 24 20:52:25 crc kubenswrapper[4812]: I1124 20:52:25.751426 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-sstpq" podStartSLOduration=2.75141 podStartE2EDuration="2.75141s" podCreationTimestamp="2025-11-24 20:52:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:25.747103328 +0000 UTC m=+5739.536055719" watchObservedRunningTime="2025-11-24 20:52:25.75141 +0000 UTC m=+5739.540362371" Nov 24 20:52:28 crc kubenswrapper[4812]: I1124 20:52:28.749905 4812 generic.go:334] "Generic (PLEG): container finished" podID="180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" containerID="a2f676d6b8ae5c685f768d0507d185c509f0d56f5792833b7ad9a0a681f109cb" exitCode=0 Nov 24 20:52:28 crc kubenswrapper[4812]: I1124 20:52:28.750059 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sstpq" event={"ID":"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e","Type":"ContainerDied","Data":"a2f676d6b8ae5c685f768d0507d185c509f0d56f5792833b7ad9a0a681f109cb"} Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.268948 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.327526 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-combined-ca-bundle\") pod \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.327645 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28hmj\" (UniqueName: \"kubernetes.io/projected/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-kube-api-access-28hmj\") pod \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.327676 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-config-data\") pod \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.328065 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-db-sync-config-data\") pod \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\" (UID: \"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e\") " Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.333961 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-kube-api-access-28hmj" (OuterVolumeSpecName: "kube-api-access-28hmj") pod "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" (UID: "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e"). InnerVolumeSpecName "kube-api-access-28hmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.337509 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" (UID: "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.353077 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" (UID: "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.383527 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-config-data" (OuterVolumeSpecName: "config-data") pod "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" (UID: "180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.431036 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.431159 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28hmj\" (UniqueName: \"kubernetes.io/projected/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-kube-api-access-28hmj\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.431232 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.431316 4812 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.778624 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-sstpq" event={"ID":"180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e","Type":"ContainerDied","Data":"814acc45c4bc213b062551eeef5aed89f8210a1e8f0ab49045bfbe6286d74283"} Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.778686 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="814acc45c4bc213b062551eeef5aed89f8210a1e8f0ab49045bfbe6286d74283" Nov 24 20:52:30 crc kubenswrapper[4812]: I1124 20:52:30.778727 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-sstpq" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.145327 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:31 crc kubenswrapper[4812]: E1124 20:52:31.146067 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" containerName="glance-db-sync" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.146089 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" containerName="glance-db-sync" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.146303 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" containerName="glance-db-sync" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.147201 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.150811 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.150893 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6xxb" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.151598 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.154081 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.231362 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b46cbf9dc-v8jvf"] Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.232809 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.240846 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b46cbf9dc-v8jvf"] Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.251302 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.251364 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.251386 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.251414 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtlp5\" (UniqueName: \"kubernetes.io/projected/5a642b25-8427-48ae-9d81-713d5e612d00-kube-api-access-qtlp5\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.251438 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-logs\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.251495 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.288530 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.290388 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.295143 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.309732 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.355253 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmc7\" (UniqueName: \"kubernetes.io/projected/a5b0636b-01c1-413f-baca-8c52574c73b3-kube-api-access-fvmc7\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.361803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362426 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89z9\" (UniqueName: \"kubernetes.io/projected/52541eeb-4f7f-44d2-80e6-92372aecb24e-kube-api-access-j89z9\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362513 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362593 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362652 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362713 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362839 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtlp5\" (UniqueName: \"kubernetes.io/projected/5a642b25-8427-48ae-9d81-713d5e612d00-kube-api-access-qtlp5\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362903 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.362978 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-logs\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363042 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363197 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-dns-svc\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363293 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-config\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363378 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363503 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363598 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363864 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-logs\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.363960 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.367247 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.380976 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.381105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.381114 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtlp5\" (UniqueName: \"kubernetes.io/projected/5a642b25-8427-48ae-9d81-713d5e612d00-kube-api-access-qtlp5\") pod \"glance-default-external-api-0\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465187 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmc7\" (UniqueName: \"kubernetes.io/projected/a5b0636b-01c1-413f-baca-8c52574c73b3-kube-api-access-fvmc7\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" 
Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465299 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j89z9\" (UniqueName: \"kubernetes.io/projected/52541eeb-4f7f-44d2-80e6-92372aecb24e-kube-api-access-j89z9\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465424 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465448 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465479 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465509 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465563 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-dns-svc\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465609 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-config\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.465661 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.466097 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-logs\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.466191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.466398 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.466537 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.466573 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-dns-svc\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.467007 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.467141 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-config\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.470118 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.470838 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.471181 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.486191 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j89z9\" (UniqueName: \"kubernetes.io/projected/52541eeb-4f7f-44d2-80e6-92372aecb24e-kube-api-access-j89z9\") pod \"dnsmasq-dns-5b46cbf9dc-v8jvf\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.497087 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmc7\" (UniqueName: \"kubernetes.io/projected/a5b0636b-01c1-413f-baca-8c52574c73b3-kube-api-access-fvmc7\") pod \"glance-default-internal-api-0\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.560094 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:31 crc kubenswrapper[4812]: I1124 20:52:31.640288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.025521 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:32 crc kubenswrapper[4812]: W1124 20:52:32.027028 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a642b25_8427_48ae_9d81_713d5e612d00.slice/crio-59f3c2a83af0595a427b49fa4dbbcec8697709f498010bf9085d1abf250a6a97 WatchSource:0}: Error finding container 59f3c2a83af0595a427b49fa4dbbcec8697709f498010bf9085d1abf250a6a97: Status 404 returned error can't find the container with id 59f3c2a83af0595a427b49fa4dbbcec8697709f498010bf9085d1abf250a6a97 Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.128301 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b46cbf9dc-v8jvf"] Nov 24 20:52:32 crc kubenswrapper[4812]: W1124 20:52:32.138703 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52541eeb_4f7f_44d2_80e6_92372aecb24e.slice/crio-ab44e0e67f18cc7bcf2dcd44ccf2667e73b16e9bcda0a6173e8063e6b5355d63 WatchSource:0}: Error finding container ab44e0e67f18cc7bcf2dcd44ccf2667e73b16e9bcda0a6173e8063e6b5355d63: Status 404 returned error can't find the container with id ab44e0e67f18cc7bcf2dcd44ccf2667e73b16e9bcda0a6173e8063e6b5355d63 Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.243749 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.420778 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.810000 4812 generic.go:334] "Generic (PLEG): container finished" podID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerID="76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821" exitCode=0 Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.810110 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" event={"ID":"52541eeb-4f7f-44d2-80e6-92372aecb24e","Type":"ContainerDied","Data":"76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821"} Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.810409 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" 
event={"ID":"52541eeb-4f7f-44d2-80e6-92372aecb24e","Type":"ContainerStarted","Data":"ab44e0e67f18cc7bcf2dcd44ccf2667e73b16e9bcda0a6173e8063e6b5355d63"} Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.813400 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5b0636b-01c1-413f-baca-8c52574c73b3","Type":"ContainerStarted","Data":"c6e8281669e44a43b8ac710a6a0e6144fb7fb0c5c5cc7bced2b641def36ac859"} Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.816085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a642b25-8427-48ae-9d81-713d5e612d00","Type":"ContainerStarted","Data":"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6"} Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.816118 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a642b25-8427-48ae-9d81-713d5e612d00","Type":"ContainerStarted","Data":"59f3c2a83af0595a427b49fa4dbbcec8697709f498010bf9085d1abf250a6a97"} Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.998055 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:52:32 crc kubenswrapper[4812]: I1124 20:52:32.998114 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.598432 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.840487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" event={"ID":"52541eeb-4f7f-44d2-80e6-92372aecb24e","Type":"ContainerStarted","Data":"caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2"} Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.840653 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.842486 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5b0636b-01c1-413f-baca-8c52574c73b3","Type":"ContainerStarted","Data":"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f"} Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.842524 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5b0636b-01c1-413f-baca-8c52574c73b3","Type":"ContainerStarted","Data":"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7"} Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.850245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a642b25-8427-48ae-9d81-713d5e612d00","Type":"ContainerStarted","Data":"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46"} Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.850511 4812 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-httpd" containerID="cri-o://6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46" gracePeriod=30 Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.850505 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-log" containerID="cri-o://a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6" gracePeriod=30 Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.862156 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" podStartSLOduration=2.862136167 podStartE2EDuration="2.862136167s" podCreationTimestamp="2025-11-24 20:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:33.8565677 +0000 UTC m=+5747.645520071" watchObservedRunningTime="2025-11-24 20:52:33.862136167 +0000 UTC m=+5747.651088538" Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.877388 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=2.87737026 podStartE2EDuration="2.87737026s" podCreationTimestamp="2025-11-24 20:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:33.87595882 +0000 UTC m=+5747.664911201" watchObservedRunningTime="2025-11-24 20:52:33.87737026 +0000 UTC m=+5747.666322631" Nov 24 20:52:33 crc kubenswrapper[4812]: I1124 20:52:33.896807 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.896790901 podStartE2EDuration="2.896790901s" podCreationTimestamp="2025-11-24 20:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:33.894108625 +0000 UTC m=+5747.683061036" watchObservedRunningTime="2025-11-24 20:52:33.896790901 +0000 UTC m=+5747.685743262" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.440621 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520052 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtlp5\" (UniqueName: \"kubernetes.io/projected/5a642b25-8427-48ae-9d81-713d5e612d00-kube-api-access-qtlp5\") pod \"5a642b25-8427-48ae-9d81-713d5e612d00\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520137 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-logs\") pod \"5a642b25-8427-48ae-9d81-713d5e612d00\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520215 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-config-data\") pod \"5a642b25-8427-48ae-9d81-713d5e612d00\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520293 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-scripts\") pod \"5a642b25-8427-48ae-9d81-713d5e612d00\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520322 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-httpd-run\") pod \"5a642b25-8427-48ae-9d81-713d5e612d00\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520392 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-combined-ca-bundle\") pod \"5a642b25-8427-48ae-9d81-713d5e612d00\" (UID: \"5a642b25-8427-48ae-9d81-713d5e612d00\") " Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520667 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-logs" (OuterVolumeSpecName: "logs") pod "5a642b25-8427-48ae-9d81-713d5e612d00" (UID: "5a642b25-8427-48ae-9d81-713d5e612d00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.520699 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5a642b25-8427-48ae-9d81-713d5e612d00" (UID: "5a642b25-8427-48ae-9d81-713d5e612d00"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.521182 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.521209 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a642b25-8427-48ae-9d81-713d5e612d00-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.525591 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a642b25-8427-48ae-9d81-713d5e612d00-kube-api-access-qtlp5" (OuterVolumeSpecName: "kube-api-access-qtlp5") pod "5a642b25-8427-48ae-9d81-713d5e612d00" (UID: "5a642b25-8427-48ae-9d81-713d5e612d00"). InnerVolumeSpecName "kube-api-access-qtlp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.550806 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a642b25-8427-48ae-9d81-713d5e612d00" (UID: "5a642b25-8427-48ae-9d81-713d5e612d00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.552673 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-scripts" (OuterVolumeSpecName: "scripts") pod "5a642b25-8427-48ae-9d81-713d5e612d00" (UID: "5a642b25-8427-48ae-9d81-713d5e612d00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.582906 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-config-data" (OuterVolumeSpecName: "config-data") pod "5a642b25-8427-48ae-9d81-713d5e612d00" (UID: "5a642b25-8427-48ae-9d81-713d5e612d00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.622676 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.622718 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.622733 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtlp5\" (UniqueName: \"kubernetes.io/projected/5a642b25-8427-48ae-9d81-713d5e612d00-kube-api-access-qtlp5\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.622745 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a642b25-8427-48ae-9d81-713d5e612d00-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.861696 4812 generic.go:334] "Generic (PLEG): container finished" podID="5a642b25-8427-48ae-9d81-713d5e612d00" containerID="6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46" exitCode=0 Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.861729 4812 generic.go:334] "Generic (PLEG): container finished" podID="5a642b25-8427-48ae-9d81-713d5e612d00" containerID="a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6" exitCode=143 Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.861785 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a642b25-8427-48ae-9d81-713d5e612d00","Type":"ContainerDied","Data":"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46"} Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.861840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a642b25-8427-48ae-9d81-713d5e612d00","Type":"ContainerDied","Data":"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6"} Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.861852 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a642b25-8427-48ae-9d81-713d5e612d00","Type":"ContainerDied","Data":"59f3c2a83af0595a427b49fa4dbbcec8697709f498010bf9085d1abf250a6a97"} Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.861866 4812 scope.go:117] "RemoveContainer" containerID="6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.862073 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.862139 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-log" containerID="cri-o://5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7" gracePeriod=30 Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.862175 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-httpd" containerID="cri-o://3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f" gracePeriod=30 Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.897965 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.898156 4812 scope.go:117] "RemoveContainer" containerID="a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.916158 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.931779 4812 scope.go:117] "RemoveContainer" containerID="6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46" Nov 24 20:52:34 crc kubenswrapper[4812]: E1124 20:52:34.932252 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46\": container with ID starting with 6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46 not found: ID does not exist" containerID="6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.932314 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46"} err="failed to get container status \"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46\": rpc error: code = NotFound desc = could not find container \"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46\": container with ID starting with 6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46 not found: ID does not exist" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.932426 4812 scope.go:117] "RemoveContainer" containerID="a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6" Nov 24 20:52:34 crc kubenswrapper[4812]: E1124 20:52:34.932998 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6\": container with ID starting with a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6 not found: ID does not exist" containerID="a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.933065 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6"} err="failed to get container status \"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6\": rpc error: code = NotFound desc = 
could not find container \"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6\": container with ID starting with a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6 not found: ID does not exist" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.933105 4812 scope.go:117] "RemoveContainer" containerID="6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.935571 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46"} err="failed to get container status \"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46\": rpc error: code = NotFound desc = could not find container \"6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46\": container with ID starting with 6da5af4bf311d543465c4c1316b89b8ba7d667d47cf72ae030785cbab4c56a46 not found: ID does not exist" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.935620 4812 scope.go:117] "RemoveContainer" containerID="a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.936138 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6"} err="failed to get container status \"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6\": rpc error: code = NotFound desc = could not find container \"a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6\": container with ID starting with a7377b918d33aa6d537b7a1f5e69ffb76e005a06db2b2b1bff09d9cf433e08d6 not found: ID does not exist" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.946043 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:34 crc kubenswrapper[4812]: E1124 20:52:34.946457 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-httpd" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.946474 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-httpd" Nov 24 20:52:34 crc kubenswrapper[4812]: E1124 20:52:34.946488 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-log" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.946495 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-log" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.946684 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-httpd" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.946711 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" containerName="glance-log" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.947680 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.952035 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.952923 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.957258 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:34 crc kubenswrapper[4812]: I1124 20:52:34.983898 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a642b25-8427-48ae-9d81-713d5e612d00" path="/var/lib/kubelet/pods/5a642b25-8427-48ae-9d81-713d5e612d00/volumes" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.033839 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-logs\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.033890 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.033928 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmfl\" (UniqueName: \"kubernetes.io/projected/38935c29-7af7-4470-8eb2-a752feb40275-kube-api-access-qcmfl\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.033954 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.033971 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.034044 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-scripts\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.034184 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.135780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-logs\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136136 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136190 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmfl\" (UniqueName: \"kubernetes.io/projected/38935c29-7af7-4470-8eb2-a752feb40275-kube-api-access-qcmfl\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136227 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136249 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136249 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-logs\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136274 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-scripts\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136314 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-config-data\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.136654 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " 
pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.144131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.144377 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-config-data\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.147903 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-scripts\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.148838 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.154331 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmfl\" (UniqueName: \"kubernetes.io/projected/38935c29-7af7-4470-8eb2-a752feb40275-kube-api-access-qcmfl\") pod \"glance-default-external-api-0\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.308747 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.626961 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.647690 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-config-data\") pod \"a5b0636b-01c1-413f-baca-8c52574c73b3\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.647764 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-scripts\") pod \"a5b0636b-01c1-413f-baca-8c52574c73b3\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.647831 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmc7\" (UniqueName: \"kubernetes.io/projected/a5b0636b-01c1-413f-baca-8c52574c73b3-kube-api-access-fvmc7\") pod \"a5b0636b-01c1-413f-baca-8c52574c73b3\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.647904 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-logs\") pod \"a5b0636b-01c1-413f-baca-8c52574c73b3\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.647976 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-combined-ca-bundle\") pod \"a5b0636b-01c1-413f-baca-8c52574c73b3\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.648008 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-httpd-run\") pod \"a5b0636b-01c1-413f-baca-8c52574c73b3\" (UID: \"a5b0636b-01c1-413f-baca-8c52574c73b3\") " Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.648569 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a5b0636b-01c1-413f-baca-8c52574c73b3" (UID: "a5b0636b-01c1-413f-baca-8c52574c73b3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.648702 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-logs" (OuterVolumeSpecName: "logs") pod "a5b0636b-01c1-413f-baca-8c52574c73b3" (UID: "a5b0636b-01c1-413f-baca-8c52574c73b3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.648909 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.648938 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5b0636b-01c1-413f-baca-8c52574c73b3-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.653006 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-scripts" (OuterVolumeSpecName: "scripts") pod "a5b0636b-01c1-413f-baca-8c52574c73b3" (UID: "a5b0636b-01c1-413f-baca-8c52574c73b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.653758 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5b0636b-01c1-413f-baca-8c52574c73b3-kube-api-access-fvmc7" (OuterVolumeSpecName: "kube-api-access-fvmc7") pod "a5b0636b-01c1-413f-baca-8c52574c73b3" (UID: "a5b0636b-01c1-413f-baca-8c52574c73b3"). InnerVolumeSpecName "kube-api-access-fvmc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.699861 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5b0636b-01c1-413f-baca-8c52574c73b3" (UID: "a5b0636b-01c1-413f-baca-8c52574c73b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.706996 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-config-data" (OuterVolumeSpecName: "config-data") pod "a5b0636b-01c1-413f-baca-8c52574c73b3" (UID: "a5b0636b-01c1-413f-baca-8c52574c73b3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.750903 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.751503 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.751661 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5b0636b-01c1-413f-baca-8c52574c73b3-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.751766 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmc7\" (UniqueName: \"kubernetes.io/projected/a5b0636b-01c1-413f-baca-8c52574c73b3-kube-api-access-fvmc7\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.845075 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:52:35 crc kubenswrapper[4812]: W1124 20:52:35.851165 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38935c29_7af7_4470_8eb2_a752feb40275.slice/crio-b2e1ea1d4ceb8b86bbc5634a5cedbb47c45da0486497480ed102b5fcb6a63ce3 WatchSource:0}: Error finding container b2e1ea1d4ceb8b86bbc5634a5cedbb47c45da0486497480ed102b5fcb6a63ce3: Status 404 returned error can't find the container with id b2e1ea1d4ceb8b86bbc5634a5cedbb47c45da0486497480ed102b5fcb6a63ce3 Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.885436 4812 generic.go:334] "Generic (PLEG): container finished" podID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerID="3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f" exitCode=0 Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.885472 4812 generic.go:334] "Generic (PLEG): container finished" podID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerID="5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7" exitCode=143 Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.885522 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5b0636b-01c1-413f-baca-8c52574c73b3","Type":"ContainerDied","Data":"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f"} Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.885550 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5b0636b-01c1-413f-baca-8c52574c73b3","Type":"ContainerDied","Data":"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7"} Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.885549 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.885564 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a5b0636b-01c1-413f-baca-8c52574c73b3","Type":"ContainerDied","Data":"c6e8281669e44a43b8ac710a6a0e6144fb7fb0c5c5cc7bced2b641def36ac859"} Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.885583 4812 scope.go:117] "RemoveContainer" containerID="3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.888904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38935c29-7af7-4470-8eb2-a752feb40275","Type":"ContainerStarted","Data":"b2e1ea1d4ceb8b86bbc5634a5cedbb47c45da0486497480ed102b5fcb6a63ce3"} Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.905594 4812 scope.go:117] "RemoveContainer" containerID="5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.934634 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.937216 4812 scope.go:117] "RemoveContainer" containerID="3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f" Nov 24 20:52:35 crc kubenswrapper[4812]: E1124 20:52:35.937708 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f\": container with ID starting with 3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f not found: ID does not exist" containerID="3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.937752 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f"} err="failed to get container status \"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f\": rpc error: code = NotFound desc = could not find container \"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f\": container with ID starting with 3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f not found: ID does not exist" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.937784 4812 scope.go:117] "RemoveContainer" containerID="5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7" Nov 24 20:52:35 crc kubenswrapper[4812]: E1124 20:52:35.938181 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7\": container with ID starting with 5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7 not found: ID does not exist" containerID="5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.938220 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7"} err="failed to get container status \"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7\": rpc error: code = NotFound desc = could not find container 
\"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7\": container with ID starting with 5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7 not found: ID does not exist" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.938256 4812 scope.go:117] "RemoveContainer" containerID="3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.938546 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f"} err="failed to get container status \"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f\": rpc error: code = NotFound desc = could not find container \"3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f\": container with ID starting with 3d503c125185a269a0f7a434fee96bec358e4dbecd528288649af5e76848a71f not found: ID does not exist" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.938576 4812 scope.go:117] "RemoveContainer" containerID="5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.938863 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7"} err="failed to get container status \"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7\": rpc error: code = NotFound desc = could not find container \"5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7\": container with ID starting with 5331830151d3ae663fce0f71a852fde477a0f576d64eaa6929edfd12bae64ba7 not found: ID does not exist" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.943516 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.965322 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:35 crc kubenswrapper[4812]: E1124 20:52:35.965654 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-log" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.965671 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-log" Nov 24 20:52:35 crc kubenswrapper[4812]: E1124 20:52:35.965697 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-httpd" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.965704 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-httpd" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.965867 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-log" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.965883 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" containerName="glance-httpd" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.966734 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.975192 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.975298 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 20:52:35 crc kubenswrapper[4812]: I1124 20:52:35.980327 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.060820 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-logs\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.060948 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.061075 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.061106 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.061133 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.061203 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-862mb\" (UniqueName: \"kubernetes.io/projected/5151fc6f-3b0d-460e-8c31-08092b0fca85-kube-api-access-862mb\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.061256 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.162658 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-862mb\" (UniqueName: \"kubernetes.io/projected/5151fc6f-3b0d-460e-8c31-08092b0fca85-kube-api-access-862mb\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.162980 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.163062 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-logs\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.163086 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.163133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.163157 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.163183 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.163745 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-logs\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.163888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.168831 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.169018 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.169050 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.174656 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.183611 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-862mb\" (UniqueName: \"kubernetes.io/projected/5151fc6f-3b0d-460e-8c31-08092b0fca85-kube-api-access-862mb\") pod \"glance-default-internal-api-0\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.288290 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.812141 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:52:36 crc kubenswrapper[4812]: W1124 20:52:36.824164 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5151fc6f_3b0d_460e_8c31_08092b0fca85.slice/crio-386ef997536b56f1296628a9b070fb3041e9eae4b9a5b3132c5220043b3b7004 WatchSource:0}: Error finding container 386ef997536b56f1296628a9b070fb3041e9eae4b9a5b3132c5220043b3b7004: Status 404 returned error can't find the container with id 386ef997536b56f1296628a9b070fb3041e9eae4b9a5b3132c5220043b3b7004 Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.903135 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5151fc6f-3b0d-460e-8c31-08092b0fca85","Type":"ContainerStarted","Data":"386ef997536b56f1296628a9b070fb3041e9eae4b9a5b3132c5220043b3b7004"} Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.915165 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38935c29-7af7-4470-8eb2-a752feb40275","Type":"ContainerStarted","Data":"9064a24928d30ca1c3ca4873fe3a0e3b7e19158aaf515861322062c22cf08c99"} Nov 24 20:52:36 crc kubenswrapper[4812]: I1124 20:52:36.986419 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5b0636b-01c1-413f-baca-8c52574c73b3" path="/var/lib/kubelet/pods/a5b0636b-01c1-413f-baca-8c52574c73b3/volumes" Nov 24 20:52:37 crc kubenswrapper[4812]: I1124 20:52:37.925675 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5151fc6f-3b0d-460e-8c31-08092b0fca85","Type":"ContainerStarted","Data":"fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29"} Nov 24 20:52:37 crc kubenswrapper[4812]: I1124 20:52:37.927659 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38935c29-7af7-4470-8eb2-a752feb40275","Type":"ContainerStarted","Data":"805b40c140e1f30d927a2501a4fcd6ddb259ee88d15535d5e69940fff038ab45"} Nov 24 20:52:37 crc kubenswrapper[4812]: I1124 20:52:37.949378 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.949325149 podStartE2EDuration="3.949325149s" podCreationTimestamp="2025-11-24 20:52:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:37.943994518 +0000 UTC m=+5751.732946929" watchObservedRunningTime="2025-11-24 20:52:37.949325149 +0000 UTC m=+5751.738277560" Nov 24 20:52:38 crc kubenswrapper[4812]: I1124 20:52:38.942277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5151fc6f-3b0d-460e-8c31-08092b0fca85","Type":"ContainerStarted","Data":"17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9"} Nov 24 20:52:39 crc kubenswrapper[4812]: I1124 20:52:39.004586 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.004545893 podStartE2EDuration="4.004545893s" podCreationTimestamp="2025-11-24 20:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:52:38.974669255 +0000 UTC m=+5752.763621686" watchObservedRunningTime="2025-11-24 20:52:39.004545893 +0000 UTC m=+5752.793498314" Nov 24 20:52:41 crc kubenswrapper[4812]: I1124 20:52:41.568677 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:52:41 crc kubenswrapper[4812]: I1124 20:52:41.688865 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb6bf6ddc-8wnfp"] Nov 24 20:52:41 crc kubenswrapper[4812]: I1124 20:52:41.689143 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" podUID="1423088d-0ab4-40ae-b307-27988d74b383" containerName="dnsmasq-dns" containerID="cri-o://76ac57bd7c3d41cf85577663362aa609e197d9c59005ef450835b7e37ff6bce4" gracePeriod=10 Nov 24 20:52:41 crc kubenswrapper[4812]: I1124 20:52:41.992165 4812 generic.go:334] "Generic (PLEG): container finished" podID="1423088d-0ab4-40ae-b307-27988d74b383" containerID="76ac57bd7c3d41cf85577663362aa609e197d9c59005ef450835b7e37ff6bce4" exitCode=0 Nov 24 20:52:41 crc kubenswrapper[4812]: I1124 20:52:41.992446 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" event={"ID":"1423088d-0ab4-40ae-b307-27988d74b383","Type":"ContainerDied","Data":"76ac57bd7c3d41cf85577663362aa609e197d9c59005ef450835b7e37ff6bce4"} Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.204695 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.209929 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbmhb\" (UniqueName: \"kubernetes.io/projected/1423088d-0ab4-40ae-b307-27988d74b383-kube-api-access-cbmhb\") pod \"1423088d-0ab4-40ae-b307-27988d74b383\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.209969 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-sb\") pod \"1423088d-0ab4-40ae-b307-27988d74b383\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.210039 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-nb\") pod \"1423088d-0ab4-40ae-b307-27988d74b383\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.210065 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-config\") pod \"1423088d-0ab4-40ae-b307-27988d74b383\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.210096 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-dns-svc\") pod \"1423088d-0ab4-40ae-b307-27988d74b383\" (UID: \"1423088d-0ab4-40ae-b307-27988d74b383\") " Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.217989 4812 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/1423088d-0ab4-40ae-b307-27988d74b383-kube-api-access-cbmhb" (OuterVolumeSpecName: "kube-api-access-cbmhb") pod "1423088d-0ab4-40ae-b307-27988d74b383" (UID: "1423088d-0ab4-40ae-b307-27988d74b383"). InnerVolumeSpecName "kube-api-access-cbmhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.276188 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1423088d-0ab4-40ae-b307-27988d74b383" (UID: "1423088d-0ab4-40ae-b307-27988d74b383"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.284259 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-config" (OuterVolumeSpecName: "config") pod "1423088d-0ab4-40ae-b307-27988d74b383" (UID: "1423088d-0ab4-40ae-b307-27988d74b383"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.284912 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1423088d-0ab4-40ae-b307-27988d74b383" (UID: "1423088d-0ab4-40ae-b307-27988d74b383"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.296563 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1423088d-0ab4-40ae-b307-27988d74b383" (UID: "1423088d-0ab4-40ae-b307-27988d74b383"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.312689 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbmhb\" (UniqueName: \"kubernetes.io/projected/1423088d-0ab4-40ae-b307-27988d74b383-kube-api-access-cbmhb\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.312721 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.312730 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.312740 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:42 crc kubenswrapper[4812]: I1124 20:52:42.312749 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1423088d-0ab4-40ae-b307-27988d74b383-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:43 crc kubenswrapper[4812]: I1124 20:52:43.010325 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" event={"ID":"1423088d-0ab4-40ae-b307-27988d74b383","Type":"ContainerDied","Data":"821a12e2b7f2a4743827d1268c4e153b2c7b2372ebaca39e86d98378f7ddd961"} Nov 24 20:52:43 crc kubenswrapper[4812]: I1124 20:52:43.010549 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb6bf6ddc-8wnfp" Nov 24 20:52:43 crc kubenswrapper[4812]: I1124 20:52:43.010736 4812 scope.go:117] "RemoveContainer" containerID="76ac57bd7c3d41cf85577663362aa609e197d9c59005ef450835b7e37ff6bce4" Nov 24 20:52:43 crc kubenswrapper[4812]: I1124 20:52:43.056045 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb6bf6ddc-8wnfp"] Nov 24 20:52:43 crc kubenswrapper[4812]: I1124 20:52:43.062209 4812 scope.go:117] "RemoveContainer" containerID="1bd2818a88cc669a11f0d2d833715d539ce18e8eff48ae07c3a0ea3533349353" Nov 24 20:52:43 crc kubenswrapper[4812]: I1124 20:52:43.069297 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fb6bf6ddc-8wnfp"] Nov 24 20:52:44 crc kubenswrapper[4812]: I1124 20:52:44.985811 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1423088d-0ab4-40ae-b307-27988d74b383" path="/var/lib/kubelet/pods/1423088d-0ab4-40ae-b307-27988d74b383/volumes" Nov 24 20:52:45 crc kubenswrapper[4812]: I1124 20:52:45.309539 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 20:52:45 crc kubenswrapper[4812]: I1124 20:52:45.309637 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 20:52:45 crc kubenswrapper[4812]: I1124 20:52:45.360837 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 20:52:45 crc kubenswrapper[4812]: I1124 20:52:45.388112 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 20:52:46 crc kubenswrapper[4812]: I1124 20:52:46.051726 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 20:52:46 crc kubenswrapper[4812]: I1124 20:52:46.051818 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 20:52:46 crc kubenswrapper[4812]: I1124 20:52:46.289429 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:46 crc kubenswrapper[4812]: I1124 20:52:46.289539 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:46 crc kubenswrapper[4812]: I1124 20:52:46.330720 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:46 crc kubenswrapper[4812]: I1124 20:52:46.362281 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:47 crc kubenswrapper[4812]: I1124 20:52:47.074399 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:47 crc kubenswrapper[4812]: I1124 20:52:47.074513 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:47 crc kubenswrapper[4812]: I1124 20:52:47.869272 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 20:52:47 crc kubenswrapper[4812]: I1124 20:52:47.968312 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" 
Nov 24 20:52:48 crc kubenswrapper[4812]: I1124 20:52:48.895236 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:48 crc kubenswrapper[4812]: I1124 20:52:48.907325 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 20:52:54 crc kubenswrapper[4812]: I1124 20:52:54.034182 4812 scope.go:117] "RemoveContainer" containerID="759c0b19a97ef6c1657ae321955603f1f6fe10c9fb2c3ff5a45759cfa1e13292" Nov 24 20:52:54 crc kubenswrapper[4812]: I1124 20:52:54.100106 4812 scope.go:117] "RemoveContainer" containerID="5a0c663e423acdb666c9ed5ed757c544bcb4868d0fc1983beb188ab8da26e35a" Nov 24 20:52:54 crc kubenswrapper[4812]: I1124 20:52:54.126610 4812 scope.go:117] "RemoveContainer" containerID="6913aac632e096748fa34e803f443ea3495dbfce7873545e4efa3e3fb5acec7e" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.370012 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-khlxn"] Nov 24 20:52:55 crc kubenswrapper[4812]: E1124 20:52:55.370954 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1423088d-0ab4-40ae-b307-27988d74b383" containerName="dnsmasq-dns" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.370979 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1423088d-0ab4-40ae-b307-27988d74b383" containerName="dnsmasq-dns" Nov 24 20:52:55 crc kubenswrapper[4812]: E1124 20:52:55.371011 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1423088d-0ab4-40ae-b307-27988d74b383" containerName="init" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.371023 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1423088d-0ab4-40ae-b307-27988d74b383" containerName="init" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.371290 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1423088d-0ab4-40ae-b307-27988d74b383" containerName="dnsmasq-dns" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.372246 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.379104 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-khlxn"] Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.414589 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76be24ae-cec2-4e47-a425-c3fd4f80897e-operator-scripts\") pod \"placement-db-create-khlxn\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.414645 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvv7f\" (UniqueName: \"kubernetes.io/projected/76be24ae-cec2-4e47-a425-c3fd4f80897e-kube-api-access-cvv7f\") pod \"placement-db-create-khlxn\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.471525 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c39f-account-create-wbbb6"] Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.473111 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.474837 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.481010 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c39f-account-create-wbbb6"] Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.517198 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76be24ae-cec2-4e47-a425-c3fd4f80897e-operator-scripts\") pod \"placement-db-create-khlxn\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.518068 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvv7f\" (UniqueName: \"kubernetes.io/projected/76be24ae-cec2-4e47-a425-c3fd4f80897e-kube-api-access-cvv7f\") pod \"placement-db-create-khlxn\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.518014 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76be24ae-cec2-4e47-a425-c3fd4f80897e-operator-scripts\") pod \"placement-db-create-khlxn\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.518144 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-operator-scripts\") pod \"placement-c39f-account-create-wbbb6\" (UID: \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.518212 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f6kl\" (UniqueName: \"kubernetes.io/projected/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-kube-api-access-6f6kl\") pod \"placement-c39f-account-create-wbbb6\" (UID: \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.543271 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvv7f\" (UniqueName: \"kubernetes.io/projected/76be24ae-cec2-4e47-a425-c3fd4f80897e-kube-api-access-cvv7f\") pod \"placement-db-create-khlxn\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.620432 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f6kl\" (UniqueName: \"kubernetes.io/projected/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-kube-api-access-6f6kl\") pod \"placement-c39f-account-create-wbbb6\" (UID: \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.620786 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-operator-scripts\") pod \"placement-c39f-account-create-wbbb6\" (UID: 
\"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.621597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-operator-scripts\") pod \"placement-c39f-account-create-wbbb6\" (UID: \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.640270 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f6kl\" (UniqueName: \"kubernetes.io/projected/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-kube-api-access-6f6kl\") pod \"placement-c39f-account-create-wbbb6\" (UID: \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.698530 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-khlxn" Nov 24 20:52:55 crc kubenswrapper[4812]: I1124 20:52:55.789168 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:56 crc kubenswrapper[4812]: I1124 20:52:56.154463 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-khlxn"] Nov 24 20:52:56 crc kubenswrapper[4812]: W1124 20:52:56.154562 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76be24ae_cec2_4e47_a425_c3fd4f80897e.slice/crio-f6a222b8f2a874d92a26b12ed7a7dbd124839789f49aa5873cc0e1d7f68af20a WatchSource:0}: Error finding container f6a222b8f2a874d92a26b12ed7a7dbd124839789f49aa5873cc0e1d7f68af20a: Status 404 returned error can't find the container with id f6a222b8f2a874d92a26b12ed7a7dbd124839789f49aa5873cc0e1d7f68af20a Nov 24 20:52:56 crc kubenswrapper[4812]: I1124 20:52:56.181676 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-khlxn" event={"ID":"76be24ae-cec2-4e47-a425-c3fd4f80897e","Type":"ContainerStarted","Data":"f6a222b8f2a874d92a26b12ed7a7dbd124839789f49aa5873cc0e1d7f68af20a"} Nov 24 20:52:56 crc kubenswrapper[4812]: W1124 20:52:56.302380 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73ec7d8a_ecc8_4fed_bfdc_6dc7077a351b.slice/crio-3faa8926b7ed6313f5722a19fa95e0e79e33bac7b662612e1638a09885459a1e WatchSource:0}: Error finding container 3faa8926b7ed6313f5722a19fa95e0e79e33bac7b662612e1638a09885459a1e: Status 404 returned error can't find the container with id 3faa8926b7ed6313f5722a19fa95e0e79e33bac7b662612e1638a09885459a1e Nov 24 20:52:56 crc kubenswrapper[4812]: I1124 20:52:56.307660 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c39f-account-create-wbbb6"] Nov 24 20:52:57 crc kubenswrapper[4812]: I1124 20:52:57.193381 4812 generic.go:334] "Generic (PLEG): container finished" podID="73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b" containerID="c09b6ddfcbf4f3545bd309b24e14737259ce67e005a61121e2600456ac7086fa" exitCode=0 Nov 24 20:52:57 crc kubenswrapper[4812]: I1124 20:52:57.193461 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c39f-account-create-wbbb6" 
event={"ID":"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b","Type":"ContainerDied","Data":"c09b6ddfcbf4f3545bd309b24e14737259ce67e005a61121e2600456ac7086fa"} Nov 24 20:52:57 crc kubenswrapper[4812]: I1124 20:52:57.193488 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c39f-account-create-wbbb6" event={"ID":"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b","Type":"ContainerStarted","Data":"3faa8926b7ed6313f5722a19fa95e0e79e33bac7b662612e1638a09885459a1e"} Nov 24 20:52:57 crc kubenswrapper[4812]: I1124 20:52:57.195136 4812 generic.go:334] "Generic (PLEG): container finished" podID="76be24ae-cec2-4e47-a425-c3fd4f80897e" containerID="39fbe1215e4991671722376c2339d8e264aeb8e3c219f841207fba5102ef9d57" exitCode=0 Nov 24 20:52:57 crc kubenswrapper[4812]: I1124 20:52:57.195160 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-khlxn" event={"ID":"76be24ae-cec2-4e47-a425-c3fd4f80897e","Type":"ContainerDied","Data":"39fbe1215e4991671722376c2339d8e264aeb8e3c219f841207fba5102ef9d57"} Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.711052 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.724675 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-khlxn" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.801080 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76be24ae-cec2-4e47-a425-c3fd4f80897e-operator-scripts\") pod \"76be24ae-cec2-4e47-a425-c3fd4f80897e\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.801212 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvv7f\" (UniqueName: \"kubernetes.io/projected/76be24ae-cec2-4e47-a425-c3fd4f80897e-kube-api-access-cvv7f\") pod \"76be24ae-cec2-4e47-a425-c3fd4f80897e\" (UID: \"76be24ae-cec2-4e47-a425-c3fd4f80897e\") " Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.801442 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f6kl\" (UniqueName: \"kubernetes.io/projected/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-kube-api-access-6f6kl\") pod \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\" (UID: \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.801513 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-operator-scripts\") pod \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\" (UID: \"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b\") " Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.803218 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76be24ae-cec2-4e47-a425-c3fd4f80897e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76be24ae-cec2-4e47-a425-c3fd4f80897e" (UID: "76be24ae-cec2-4e47-a425-c3fd4f80897e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.805151 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b" (UID: "73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.809573 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76be24ae-cec2-4e47-a425-c3fd4f80897e-kube-api-access-cvv7f" (OuterVolumeSpecName: "kube-api-access-cvv7f") pod "76be24ae-cec2-4e47-a425-c3fd4f80897e" (UID: "76be24ae-cec2-4e47-a425-c3fd4f80897e"). InnerVolumeSpecName "kube-api-access-cvv7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.811522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-kube-api-access-6f6kl" (OuterVolumeSpecName: "kube-api-access-6f6kl") pod "73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b" (UID: "73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b"). InnerVolumeSpecName "kube-api-access-6f6kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.904916 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f6kl\" (UniqueName: \"kubernetes.io/projected/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-kube-api-access-6f6kl\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.904987 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.905006 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76be24ae-cec2-4e47-a425-c3fd4f80897e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:58 crc kubenswrapper[4812]: I1124 20:52:58.905024 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvv7f\" (UniqueName: \"kubernetes.io/projected/76be24ae-cec2-4e47-a425-c3fd4f80897e-kube-api-access-cvv7f\") on node \"crc\" DevicePath \"\"" Nov 24 20:52:59 crc kubenswrapper[4812]: I1124 20:52:59.221797 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c39f-account-create-wbbb6" event={"ID":"73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b","Type":"ContainerDied","Data":"3faa8926b7ed6313f5722a19fa95e0e79e33bac7b662612e1638a09885459a1e"} Nov 24 20:52:59 crc kubenswrapper[4812]: I1124 20:52:59.221853 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3faa8926b7ed6313f5722a19fa95e0e79e33bac7b662612e1638a09885459a1e" Nov 24 20:52:59 crc kubenswrapper[4812]: I1124 20:52:59.221863 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c39f-account-create-wbbb6" Nov 24 20:52:59 crc kubenswrapper[4812]: I1124 20:52:59.225127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-khlxn" event={"ID":"76be24ae-cec2-4e47-a425-c3fd4f80897e","Type":"ContainerDied","Data":"f6a222b8f2a874d92a26b12ed7a7dbd124839789f49aa5873cc0e1d7f68af20a"} Nov 24 20:52:59 crc kubenswrapper[4812]: I1124 20:52:59.225177 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6a222b8f2a874d92a26b12ed7a7dbd124839789f49aa5873cc0e1d7f68af20a" Nov 24 20:52:59 crc kubenswrapper[4812]: I1124 20:52:59.225284 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-khlxn" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.869614 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f9cd89ff9-dsg2v"] Nov 24 20:53:00 crc kubenswrapper[4812]: E1124 20:53:00.870206 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76be24ae-cec2-4e47-a425-c3fd4f80897e" containerName="mariadb-database-create" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.870224 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="76be24ae-cec2-4e47-a425-c3fd4f80897e" containerName="mariadb-database-create" Nov 24 20:53:00 crc kubenswrapper[4812]: E1124 20:53:00.870237 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b" containerName="mariadb-account-create" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.870245 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b" containerName="mariadb-account-create" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.870474 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="76be24ae-cec2-4e47-a425-c3fd4f80897e" containerName="mariadb-database-create" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.870491 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b" containerName="mariadb-account-create" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.871591 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.895225 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-k4kdh"] Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.896469 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.902728 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.902922 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nr7cb" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.902970 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.912617 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9cd89ff9-dsg2v"] Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.929871 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k4kdh"] Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.943290 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-config-data\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.943603 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-config\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.943736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-combined-ca-bundle\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.943883 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.943986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwszh\" (UniqueName: \"kubernetes.io/projected/ef27b779-8611-4ba0-a021-de389bb54713-kube-api-access-pwszh\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.944081 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvwrj\" (UniqueName: \"kubernetes.io/projected/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-kube-api-access-vvwrj\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.944190 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-scripts\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.944327 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-logs\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.944519 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-dns-svc\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:00 crc kubenswrapper[4812]: I1124 20:53:00.944691 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047094 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047145 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwszh\" (UniqueName: \"kubernetes.io/projected/ef27b779-8611-4ba0-a021-de389bb54713-kube-api-access-pwszh\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvwrj\" (UniqueName: \"kubernetes.io/projected/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-kube-api-access-vvwrj\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047185 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-scripts\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047234 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-logs\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047251 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-dns-svc\") pod 
\"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047296 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047326 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-config-data\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047357 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-config\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.047373 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-combined-ca-bundle\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.049785 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-logs\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.050271 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-nb\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.050667 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-sb\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.050842 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-config\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.051189 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-dns-svc\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.057290 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-scripts\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.058017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-config-data\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.069989 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvwrj\" (UniqueName: \"kubernetes.io/projected/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-kube-api-access-vvwrj\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.071465 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-combined-ca-bundle\") pod \"placement-db-sync-k4kdh\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.076414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwszh\" (UniqueName: \"kubernetes.io/projected/ef27b779-8611-4ba0-a021-de389bb54713-kube-api-access-pwszh\") pod \"dnsmasq-dns-7f9cd89ff9-dsg2v\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.208110 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.233471 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.675525 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f9cd89ff9-dsg2v"] Nov 24 20:53:01 crc kubenswrapper[4812]: W1124 20:53:01.776118 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab3c432_0cc9_45e9_b3c2_5a96f901c3d0.slice/crio-b5ffbcf464d5632e552a07d48fb8bf2c1ae793c33172a16c92d70accd284602d WatchSource:0}: Error finding container b5ffbcf464d5632e552a07d48fb8bf2c1ae793c33172a16c92d70accd284602d: Status 404 returned error can't find the container with id b5ffbcf464d5632e552a07d48fb8bf2c1ae793c33172a16c92d70accd284602d Nov 24 20:53:01 crc kubenswrapper[4812]: I1124 20:53:01.776599 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-k4kdh"] Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.268984 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k4kdh" event={"ID":"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0","Type":"ContainerStarted","Data":"ff429dcfaf040aff6ae0dda13aed7deb09b8efc304d2b742ecf17720af05bf1d"} Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.269360 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k4kdh" event={"ID":"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0","Type":"ContainerStarted","Data":"b5ffbcf464d5632e552a07d48fb8bf2c1ae793c33172a16c92d70accd284602d"} Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.273579 4812 generic.go:334] "Generic (PLEG): container finished" podID="ef27b779-8611-4ba0-a021-de389bb54713" containerID="3e6ed54f02ba37a8188b11faa3f92113150a6824bc96c95775be94f41da23b40" exitCode=0 Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.273657 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" event={"ID":"ef27b779-8611-4ba0-a021-de389bb54713","Type":"ContainerDied","Data":"3e6ed54f02ba37a8188b11faa3f92113150a6824bc96c95775be94f41da23b40"} Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.273699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" event={"ID":"ef27b779-8611-4ba0-a021-de389bb54713","Type":"ContainerStarted","Data":"52d3f6f17e582aaaa4f0d733ce022b321d6e4df2dc5860e1352ecfa70d2e6b17"} Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.334116 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-k4kdh" podStartSLOduration=2.334093413 podStartE2EDuration="2.334093413s" podCreationTimestamp="2025-11-24 20:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:53:02.300938082 +0000 UTC m=+5776.089890493" watchObservedRunningTime="2025-11-24 20:53:02.334093413 +0000 UTC m=+5776.123045794" Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.998687 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:53:02 crc kubenswrapper[4812]: I1124 20:53:02.999014 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" 
podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:53:03 crc kubenswrapper[4812]: I1124 20:53:03.289036 4812 generic.go:334] "Generic (PLEG): container finished" podID="0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" containerID="ff429dcfaf040aff6ae0dda13aed7deb09b8efc304d2b742ecf17720af05bf1d" exitCode=0 Nov 24 20:53:03 crc kubenswrapper[4812]: I1124 20:53:03.289115 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k4kdh" event={"ID":"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0","Type":"ContainerDied","Data":"ff429dcfaf040aff6ae0dda13aed7deb09b8efc304d2b742ecf17720af05bf1d"} Nov 24 20:53:03 crc kubenswrapper[4812]: I1124 20:53:03.293692 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" event={"ID":"ef27b779-8611-4ba0-a021-de389bb54713","Type":"ContainerStarted","Data":"a4edb38be07ab8982447384ea7f74bee958d3b36d7ad68eb697765544a7e7f48"} Nov 24 20:53:03 crc kubenswrapper[4812]: I1124 20:53:03.294532 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:03 crc kubenswrapper[4812]: I1124 20:53:03.331067 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" podStartSLOduration=3.331039414 podStartE2EDuration="3.331039414s" podCreationTimestamp="2025-11-24 20:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:53:03.327327088 +0000 UTC m=+5777.116279479" watchObservedRunningTime="2025-11-24 20:53:03.331039414 +0000 UTC m=+5777.119991795" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.775645 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.842094 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvwrj\" (UniqueName: \"kubernetes.io/projected/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-kube-api-access-vvwrj\") pod \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.842522 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-combined-ca-bundle\") pod \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.842563 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-logs\") pod \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.842746 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-config-data\") pod \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.842787 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-scripts\") pod \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\" (UID: \"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0\") " Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.843468 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-logs" (OuterVolumeSpecName: "logs") pod "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" (UID: "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.862558 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-scripts" (OuterVolumeSpecName: "scripts") pod "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" (UID: "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.863091 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-kube-api-access-vvwrj" (OuterVolumeSpecName: "kube-api-access-vvwrj") pod "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" (UID: "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0"). InnerVolumeSpecName "kube-api-access-vvwrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.874622 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-config-data" (OuterVolumeSpecName: "config-data") pod "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" (UID: "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.879409 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" (UID: "0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.945223 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.945269 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.945292 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvwrj\" (UniqueName: \"kubernetes.io/projected/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-kube-api-access-vvwrj\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.945311 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:04 crc kubenswrapper[4812]: I1124 20:53:04.945348 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.322744 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-k4kdh" event={"ID":"0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0","Type":"ContainerDied","Data":"b5ffbcf464d5632e552a07d48fb8bf2c1ae793c33172a16c92d70accd284602d"} Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.322802 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ffbcf464d5632e552a07d48fb8bf2c1ae793c33172a16c92d70accd284602d" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.322820 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-k4kdh" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.406967 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55749cf576-r94b9"] Nov 24 20:53:05 crc kubenswrapper[4812]: E1124 20:53:05.407461 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" containerName="placement-db-sync" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.407489 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" containerName="placement-db-sync" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.407800 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" containerName="placement-db-sync" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.409004 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.412170 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.412451 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nr7cb" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.412761 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.414173 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.422205 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.447600 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55749cf576-r94b9"] Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.473584 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81353499-bb5a-4a56-85b1-1e009e60e610-logs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.473701 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-scripts\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.473803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-public-tls-certs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.473875 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-combined-ca-bundle\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.473954 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-internal-tls-certs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.474169 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6645\" (UniqueName: \"kubernetes.io/projected/81353499-bb5a-4a56-85b1-1e009e60e610-kube-api-access-t6645\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.474244 
4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-config-data\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.576489 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81353499-bb5a-4a56-85b1-1e009e60e610-logs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.576560 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-scripts\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.576595 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-public-tls-certs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.576645 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-combined-ca-bundle\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.576688 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-internal-tls-certs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.576788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6645\" (UniqueName: \"kubernetes.io/projected/81353499-bb5a-4a56-85b1-1e009e60e610-kube-api-access-t6645\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.576831 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-config-data\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.577017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81353499-bb5a-4a56-85b1-1e009e60e610-logs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.580869 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-public-tls-certs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.581648 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-config-data\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.585961 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-internal-tls-certs\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.588552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-combined-ca-bundle\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.590748 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81353499-bb5a-4a56-85b1-1e009e60e610-scripts\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.596068 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6645\" (UniqueName: \"kubernetes.io/projected/81353499-bb5a-4a56-85b1-1e009e60e610-kube-api-access-t6645\") pod \"placement-55749cf576-r94b9\" (UID: \"81353499-bb5a-4a56-85b1-1e009e60e610\") " pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:05 crc kubenswrapper[4812]: I1124 20:53:05.745528 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:06 crc kubenswrapper[4812]: I1124 20:53:06.259833 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55749cf576-r94b9"] Nov 24 20:53:06 crc kubenswrapper[4812]: W1124 20:53:06.274898 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81353499_bb5a_4a56_85b1_1e009e60e610.slice/crio-5345169511de4a55b9ea55600c2ec8706f840b5f8b08788fc5bc7e10f1300c6b WatchSource:0}: Error finding container 5345169511de4a55b9ea55600c2ec8706f840b5f8b08788fc5bc7e10f1300c6b: Status 404 returned error can't find the container with id 5345169511de4a55b9ea55600c2ec8706f840b5f8b08788fc5bc7e10f1300c6b Nov 24 20:53:06 crc kubenswrapper[4812]: I1124 20:53:06.333259 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55749cf576-r94b9" event={"ID":"81353499-bb5a-4a56-85b1-1e009e60e610","Type":"ContainerStarted","Data":"5345169511de4a55b9ea55600c2ec8706f840b5f8b08788fc5bc7e10f1300c6b"} Nov 24 20:53:07 crc kubenswrapper[4812]: I1124 20:53:07.348903 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55749cf576-r94b9" event={"ID":"81353499-bb5a-4a56-85b1-1e009e60e610","Type":"ContainerStarted","Data":"7d8dc88b4b1c452d7a672128a8d23f01699fc5e1fd762e69470ace2568fb4e4c"} Nov 24 20:53:07 crc kubenswrapper[4812]: I1124 20:53:07.349247 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:07 crc kubenswrapper[4812]: I1124 20:53:07.349269 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55749cf576-r94b9" event={"ID":"81353499-bb5a-4a56-85b1-1e009e60e610","Type":"ContainerStarted","Data":"46561cb9f1334c777a7e6d6517a93a4aae08f1c4542fe6b3f5da6f3bfa7c102d"} Nov 24 20:53:07 crc kubenswrapper[4812]: I1124 20:53:07.388367 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55749cf576-r94b9" podStartSLOduration=2.388320937 podStartE2EDuration="2.388320937s" podCreationTimestamp="2025-11-24 20:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:53:07.374615518 +0000 UTC m=+5781.163567959" watchObservedRunningTime="2025-11-24 20:53:07.388320937 +0000 UTC m=+5781.177273338" Nov 24 20:53:08 crc kubenswrapper[4812]: I1124 20:53:08.361016 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.209586 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.298111 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b46cbf9dc-v8jvf"] Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.298408 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerName="dnsmasq-dns" containerID="cri-o://caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2" gracePeriod=10 Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.834906 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.940259 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-dns-svc\") pod \"52541eeb-4f7f-44d2-80e6-92372aecb24e\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.940304 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j89z9\" (UniqueName: \"kubernetes.io/projected/52541eeb-4f7f-44d2-80e6-92372aecb24e-kube-api-access-j89z9\") pod \"52541eeb-4f7f-44d2-80e6-92372aecb24e\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.941250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-sb\") pod \"52541eeb-4f7f-44d2-80e6-92372aecb24e\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.941317 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-config\") pod \"52541eeb-4f7f-44d2-80e6-92372aecb24e\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.941362 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-nb\") pod \"52541eeb-4f7f-44d2-80e6-92372aecb24e\" (UID: \"52541eeb-4f7f-44d2-80e6-92372aecb24e\") " Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.947469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52541eeb-4f7f-44d2-80e6-92372aecb24e-kube-api-access-j89z9" (OuterVolumeSpecName: "kube-api-access-j89z9") pod "52541eeb-4f7f-44d2-80e6-92372aecb24e" (UID: "52541eeb-4f7f-44d2-80e6-92372aecb24e"). InnerVolumeSpecName "kube-api-access-j89z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:53:11 crc kubenswrapper[4812]: I1124 20:53:11.995831 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-config" (OuterVolumeSpecName: "config") pod "52541eeb-4f7f-44d2-80e6-92372aecb24e" (UID: "52541eeb-4f7f-44d2-80e6-92372aecb24e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.001426 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52541eeb-4f7f-44d2-80e6-92372aecb24e" (UID: "52541eeb-4f7f-44d2-80e6-92372aecb24e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.008664 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52541eeb-4f7f-44d2-80e6-92372aecb24e" (UID: "52541eeb-4f7f-44d2-80e6-92372aecb24e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.013714 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52541eeb-4f7f-44d2-80e6-92372aecb24e" (UID: "52541eeb-4f7f-44d2-80e6-92372aecb24e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.042846 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.042877 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.042892 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j89z9\" (UniqueName: \"kubernetes.io/projected/52541eeb-4f7f-44d2-80e6-92372aecb24e-kube-api-access-j89z9\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.042905 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.042918 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52541eeb-4f7f-44d2-80e6-92372aecb24e-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.423134 4812 generic.go:334] "Generic (PLEG): container finished" podID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerID="caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2" exitCode=0 Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.423208 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" event={"ID":"52541eeb-4f7f-44d2-80e6-92372aecb24e","Type":"ContainerDied","Data":"caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2"} Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.424540 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" event={"ID":"52541eeb-4f7f-44d2-80e6-92372aecb24e","Type":"ContainerDied","Data":"ab44e0e67f18cc7bcf2dcd44ccf2667e73b16e9bcda0a6173e8063e6b5355d63"} Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.424578 4812 scope.go:117] "RemoveContainer" containerID="caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.423253 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.456036 4812 scope.go:117] "RemoveContainer" containerID="76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.482603 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b46cbf9dc-v8jvf"] Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.503145 4812 scope.go:117] "RemoveContainer" containerID="caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.503808 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b46cbf9dc-v8jvf"] Nov 24 20:53:12 crc kubenswrapper[4812]: E1124 20:53:12.503812 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2\": container with ID starting with caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2 not found: ID does not exist" containerID="caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.503930 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2"} err="failed to get container status \"caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2\": rpc error: code = NotFound desc = could not find container \"caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2\": container with ID starting with caedff0fb729c11188f9b834f9a0fd10405c8eac8ef46661bf77dd5b89bf1fc2 not found: ID does not exist" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.503977 4812 scope.go:117] "RemoveContainer" containerID="76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821" Nov 24 20:53:12 crc kubenswrapper[4812]: E1124 20:53:12.504783 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821\": container with ID starting with 76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821 not found: ID does not exist" containerID="76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.504878 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821"} err="failed to get container status \"76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821\": rpc error: code = NotFound desc = could not find container \"76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821\": container with ID starting with 76053d8fd85cc68eb1a7936af940ab2829f3c10bc5b677bbc093f05012254821 not found: ID does not exist" Nov 24 20:53:12 crc kubenswrapper[4812]: I1124 20:53:12.986431 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" path="/var/lib/kubelet/pods/52541eeb-4f7f-44d2-80e6-92372aecb24e/volumes" Nov 24 20:53:16 crc kubenswrapper[4812]: I1124 20:53:16.561199 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b46cbf9dc-v8jvf" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.1.65:5353: i/o timeout" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.803319 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qk5t2"] Nov 24 20:53:20 crc kubenswrapper[4812]: E1124 20:53:20.804980 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerName="dnsmasq-dns" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.805004 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerName="dnsmasq-dns" Nov 24 20:53:20 crc kubenswrapper[4812]: E1124 20:53:20.805037 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerName="init" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.805051 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerName="init" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.805521 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="52541eeb-4f7f-44d2-80e6-92372aecb24e" containerName="dnsmasq-dns" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.810671 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.838635 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qk5t2"] Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.851283 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-catalog-content\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.851404 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-utilities\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.851431 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sv7z\" (UniqueName: \"kubernetes.io/projected/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-kube-api-access-9sv7z\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.953455 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-utilities\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.953518 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sv7z\" (UniqueName: \"kubernetes.io/projected/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-kube-api-access-9sv7z\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " 
pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.953654 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-catalog-content\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.954223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-catalog-content\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.954601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-utilities\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:20 crc kubenswrapper[4812]: I1124 20:53:20.986746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sv7z\" (UniqueName: \"kubernetes.io/projected/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-kube-api-access-9sv7z\") pod \"redhat-operators-qk5t2\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") " pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:21 crc kubenswrapper[4812]: I1124 20:53:21.156262 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:21 crc kubenswrapper[4812]: I1124 20:53:21.621986 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qk5t2"] Nov 24 20:53:22 crc kubenswrapper[4812]: I1124 20:53:22.548673 4812 generic.go:334] "Generic (PLEG): container finished" podID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerID="fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d" exitCode=0 Nov 24 20:53:22 crc kubenswrapper[4812]: I1124 20:53:22.548840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk5t2" event={"ID":"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611","Type":"ContainerDied","Data":"fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d"} Nov 24 20:53:22 crc kubenswrapper[4812]: I1124 20:53:22.549085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk5t2" event={"ID":"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611","Type":"ContainerStarted","Data":"d8aee1e2f222f7fc5c92b6b471733fb8d87294805bc58d3a583cf79ffa26d886"} Nov 24 20:53:23 crc kubenswrapper[4812]: I1124 20:53:23.566141 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk5t2" event={"ID":"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611","Type":"ContainerStarted","Data":"2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b"} Nov 24 20:53:24 crc kubenswrapper[4812]: I1124 20:53:24.583702 4812 generic.go:334] "Generic (PLEG): container finished" podID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerID="2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b" exitCode=0 Nov 24 20:53:24 crc kubenswrapper[4812]: I1124 20:53:24.583773 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-qk5t2" event={"ID":"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611","Type":"ContainerDied","Data":"2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b"} Nov 24 20:53:25 crc kubenswrapper[4812]: I1124 20:53:25.597523 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk5t2" event={"ID":"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611","Type":"ContainerStarted","Data":"a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe"} Nov 24 20:53:25 crc kubenswrapper[4812]: I1124 20:53:25.632666 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qk5t2" podStartSLOduration=3.183664678 podStartE2EDuration="5.632639644s" podCreationTimestamp="2025-11-24 20:53:20 +0000 UTC" firstStartedPulling="2025-11-24 20:53:22.551767508 +0000 UTC m=+5796.340719909" lastFinishedPulling="2025-11-24 20:53:25.000742474 +0000 UTC m=+5798.789694875" observedRunningTime="2025-11-24 20:53:25.625240794 +0000 UTC m=+5799.414193195" watchObservedRunningTime="2025-11-24 20:53:25.632639644 +0000 UTC m=+5799.421592045" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.784669 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9s6qt"] Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.786362 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.790726 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-catalog-content\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.790813 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-utilities\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.790860 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrl4w\" (UniqueName: \"kubernetes.io/projected/80cb6407-4424-427d-9335-a44fbe01568b-kube-api-access-zrl4w\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.862139 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s6qt"] Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.892996 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-utilities\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.893618 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-utilities\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.894474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrl4w\" (UniqueName: \"kubernetes.io/projected/80cb6407-4424-427d-9335-a44fbe01568b-kube-api-access-zrl4w\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.894960 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-catalog-content\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.895361 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-catalog-content\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:26 crc kubenswrapper[4812]: I1124 20:53:26.921268 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrl4w\" (UniqueName: \"kubernetes.io/projected/80cb6407-4424-427d-9335-a44fbe01568b-kube-api-access-zrl4w\") pod \"community-operators-9s6qt\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:27 crc kubenswrapper[4812]: I1124 20:53:27.127153 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:27 crc kubenswrapper[4812]: I1124 20:53:27.687558 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s6qt"] Nov 24 20:53:27 crc kubenswrapper[4812]: W1124 20:53:27.691993 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80cb6407_4424_427d_9335_a44fbe01568b.slice/crio-e077af79fc1e5af312a7588b5c1bd5070c2650c14afc135b683cba9c030947b1 WatchSource:0}: Error finding container e077af79fc1e5af312a7588b5c1bd5070c2650c14afc135b683cba9c030947b1: Status 404 returned error can't find the container with id e077af79fc1e5af312a7588b5c1bd5070c2650c14afc135b683cba9c030947b1 Nov 24 20:53:28 crc kubenswrapper[4812]: I1124 20:53:28.630706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerStarted","Data":"5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1"} Nov 24 20:53:28 crc kubenswrapper[4812]: I1124 20:53:28.631042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerStarted","Data":"e077af79fc1e5af312a7588b5c1bd5070c2650c14afc135b683cba9c030947b1"} Nov 24 20:53:29 crc kubenswrapper[4812]: I1124 20:53:29.642951 4812 generic.go:334] "Generic (PLEG): container finished" podID="80cb6407-4424-427d-9335-a44fbe01568b" containerID="5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1" exitCode=0 Nov 24 20:53:29 crc kubenswrapper[4812]: I1124 20:53:29.643044 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerDied","Data":"5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1"} Nov 24 20:53:30 crc kubenswrapper[4812]: I1124 20:53:30.657768 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerStarted","Data":"fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff"} Nov 24 20:53:31 crc kubenswrapper[4812]: I1124 20:53:31.156503 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:31 crc kubenswrapper[4812]: I1124 20:53:31.157139 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qk5t2" Nov 24 20:53:31 crc kubenswrapper[4812]: I1124 20:53:31.676401 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerDied","Data":"fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff"} Nov 24 20:53:31 crc kubenswrapper[4812]: I1124 20:53:31.676406 4812 generic.go:334] "Generic (PLEG): container finished" podID="80cb6407-4424-427d-9335-a44fbe01568b" containerID="fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff" exitCode=0 Nov 24 20:53:32 crc kubenswrapper[4812]: I1124 20:53:32.210400 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qk5t2" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="registry-server" 
probeResult="failure" output=< Nov 24 20:53:32 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 20:53:32 crc kubenswrapper[4812]: > Nov 24 20:53:32 crc kubenswrapper[4812]: I1124 20:53:32.692727 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerStarted","Data":"fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6"} Nov 24 20:53:32 crc kubenswrapper[4812]: I1124 20:53:32.715556 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9s6qt" podStartSLOduration=4.268865946 podStartE2EDuration="6.715532205s" podCreationTimestamp="2025-11-24 20:53:26 +0000 UTC" firstStartedPulling="2025-11-24 20:53:29.646272209 +0000 UTC m=+5803.435224620" lastFinishedPulling="2025-11-24 20:53:32.092938498 +0000 UTC m=+5805.881890879" observedRunningTime="2025-11-24 20:53:32.709861634 +0000 UTC m=+5806.498814005" watchObservedRunningTime="2025-11-24 20:53:32.715532205 +0000 UTC m=+5806.504484596" Nov 24 20:53:32 crc kubenswrapper[4812]: I1124 20:53:32.999064 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:53:32 crc kubenswrapper[4812]: I1124 20:53:32.999326 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:53:32 crc kubenswrapper[4812]: I1124 20:53:32.999485 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:53:33 crc kubenswrapper[4812]: I1124 20:53:33.002178 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ed217e6b672176474a4c358d1230ff543c81838c55c07bc096791453845da5d"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:53:33 crc kubenswrapper[4812]: I1124 20:53:33.002419 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://1ed217e6b672176474a4c358d1230ff543c81838c55c07bc096791453845da5d" gracePeriod=600 Nov 24 20:53:35 crc kubenswrapper[4812]: I1124 20:53:35.731009 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="1ed217e6b672176474a4c358d1230ff543c81838c55c07bc096791453845da5d" exitCode=0 Nov 24 20:53:35 crc kubenswrapper[4812]: I1124 20:53:35.731076 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"1ed217e6b672176474a4c358d1230ff543c81838c55c07bc096791453845da5d"} Nov 24 20:53:35 crc kubenswrapper[4812]: I1124 
20:53:35.732060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196"} Nov 24 20:53:35 crc kubenswrapper[4812]: I1124 20:53:35.732094 4812 scope.go:117] "RemoveContainer" containerID="a6329b7de34ba423e47f0e671cb8cfa1e1827204b93ccd718eaf92b03268d445" Nov 24 20:53:36 crc kubenswrapper[4812]: I1124 20:53:36.680155 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:36 crc kubenswrapper[4812]: I1124 20:53:36.682088 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55749cf576-r94b9" Nov 24 20:53:37 crc kubenswrapper[4812]: I1124 20:53:37.127607 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:37 crc kubenswrapper[4812]: I1124 20:53:37.127654 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:37 crc kubenswrapper[4812]: I1124 20:53:37.194171 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:37 crc kubenswrapper[4812]: I1124 20:53:37.868982 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:37 crc kubenswrapper[4812]: I1124 20:53:37.917480 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9s6qt"] Nov 24 20:53:39 crc kubenswrapper[4812]: I1124 20:53:39.801610 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9s6qt" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="registry-server" containerID="cri-o://fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6" gracePeriod=2 Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.349966 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9s6qt" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.397974 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrl4w\" (UniqueName: \"kubernetes.io/projected/80cb6407-4424-427d-9335-a44fbe01568b-kube-api-access-zrl4w\") pod \"80cb6407-4424-427d-9335-a44fbe01568b\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.398117 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-catalog-content\") pod \"80cb6407-4424-427d-9335-a44fbe01568b\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.398314 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-utilities\") pod \"80cb6407-4424-427d-9335-a44fbe01568b\" (UID: \"80cb6407-4424-427d-9335-a44fbe01568b\") " Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.399711 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-utilities" (OuterVolumeSpecName: "utilities") pod "80cb6407-4424-427d-9335-a44fbe01568b" (UID: "80cb6407-4424-427d-9335-a44fbe01568b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.406799 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cb6407-4424-427d-9335-a44fbe01568b-kube-api-access-zrl4w" (OuterVolumeSpecName: "kube-api-access-zrl4w") pod "80cb6407-4424-427d-9335-a44fbe01568b" (UID: "80cb6407-4424-427d-9335-a44fbe01568b"). InnerVolumeSpecName "kube-api-access-zrl4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.485545 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80cb6407-4424-427d-9335-a44fbe01568b" (UID: "80cb6407-4424-427d-9335-a44fbe01568b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.501189 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.501432 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrl4w\" (UniqueName: \"kubernetes.io/projected/80cb6407-4424-427d-9335-a44fbe01568b-kube-api-access-zrl4w\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.501446 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80cb6407-4424-427d-9335-a44fbe01568b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.816254 4812 generic.go:334] "Generic (PLEG): container finished" podID="80cb6407-4424-427d-9335-a44fbe01568b" containerID="fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6" exitCode=0 Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.816315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerDied","Data":"fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6"} Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.816407 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s6qt" event={"ID":"80cb6407-4424-427d-9335-a44fbe01568b","Type":"ContainerDied","Data":"e077af79fc1e5af312a7588b5c1bd5070c2650c14afc135b683cba9c030947b1"} Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.816434 4812 scope.go:117] "RemoveContainer" containerID="fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6" Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.816714 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.847443 4812 scope.go:117] "RemoveContainer" containerID="fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.890617 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9s6qt"]
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.896757 4812 scope.go:117] "RemoveContainer" containerID="5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.907017 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9s6qt"]
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.946824 4812 scope.go:117] "RemoveContainer" containerID="fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6"
Nov 24 20:53:40 crc kubenswrapper[4812]: E1124 20:53:40.947332 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6\": container with ID starting with fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6 not found: ID does not exist" containerID="fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.947459 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6"} err="failed to get container status \"fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6\": rpc error: code = NotFound desc = could not find container \"fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6\": container with ID starting with fa5f402e2d97aa29adaa4840261a4d6d451e37bc7941afb244e8cc869a25a7b6 not found: ID does not exist"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.947494 4812 scope.go:117] "RemoveContainer" containerID="fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff"
Nov 24 20:53:40 crc kubenswrapper[4812]: E1124 20:53:40.948018 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff\": container with ID starting with fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff not found: ID does not exist" containerID="fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.948072 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff"} err="failed to get container status \"fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff\": rpc error: code = NotFound desc = could not find container \"fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff\": container with ID starting with fe19368d0b018424ebc6651e3e8af41e2a99226952536b44cb1ebcd14aa104ff not found: ID does not exist"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.948103 4812 scope.go:117] "RemoveContainer" containerID="5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1"
Nov 24 20:53:40 crc kubenswrapper[4812]: E1124 20:53:40.948566 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1\": container with ID starting with 5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1 not found: ID does not exist" containerID="5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.948598 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1"} err="failed to get container status \"5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1\": rpc error: code = NotFound desc = could not find container \"5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1\": container with ID starting with 5a5ece5705f9feed108a9e08c631a8d305446a97e97615f7945610f4f04efdf1 not found: ID does not exist"
Nov 24 20:53:40 crc kubenswrapper[4812]: I1124 20:53:40.976252 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cb6407-4424-427d-9335-a44fbe01568b" path="/var/lib/kubelet/pods/80cb6407-4424-427d-9335-a44fbe01568b/volumes"
Nov 24 20:53:41 crc kubenswrapper[4812]: I1124 20:53:41.227373 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qk5t2"
Nov 24 20:53:41 crc kubenswrapper[4812]: I1124 20:53:41.287040 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qk5t2"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.182286 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qk5t2"]
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.182578 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qk5t2" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="registry-server" containerID="cri-o://a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe" gracePeriod=2
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.674026 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qk5t2"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.771510 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-catalog-content\") pod \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") "
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.771567 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sv7z\" (UniqueName: \"kubernetes.io/projected/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-kube-api-access-9sv7z\") pod \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") "
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.771785 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-utilities\") pod \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\" (UID: \"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611\") "
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.772556 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-utilities" (OuterVolumeSpecName: "utilities") pod "6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" (UID: "6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.779645 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-kube-api-access-9sv7z" (OuterVolumeSpecName: "kube-api-access-9sv7z") pod "6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" (UID: "6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611"). InnerVolumeSpecName "kube-api-access-9sv7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.858613 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" (UID: "6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.860180 4812 generic.go:334] "Generic (PLEG): container finished" podID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerID="a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe" exitCode=0
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.860223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk5t2" event={"ID":"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611","Type":"ContainerDied","Data":"a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe"}
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.860264 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qk5t2" event={"ID":"6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611","Type":"ContainerDied","Data":"d8aee1e2f222f7fc5c92b6b471733fb8d87294805bc58d3a583cf79ffa26d886"}
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.860297 4812 scope.go:117] "RemoveContainer" containerID="a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.860302 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qk5t2"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.873548 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.873580 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.873594 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sv7z\" (UniqueName: \"kubernetes.io/projected/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611-kube-api-access-9sv7z\") on node \"crc\" DevicePath \"\""
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.884106 4812 scope.go:117] "RemoveContainer" containerID="2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.910767 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qk5t2"]
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.917673 4812 scope.go:117] "RemoveContainer" containerID="fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.927767 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qk5t2"]
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.966668 4812 scope.go:117] "RemoveContainer" containerID="a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe"
Nov 24 20:53:43 crc kubenswrapper[4812]: E1124 20:53:43.967049 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe\": container with ID starting with a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe not found: ID does not exist" containerID="a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.967084 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe"} err="failed to get container status \"a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe\": rpc error: code = NotFound desc = could not find container \"a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe\": container with ID starting with a85bb0631a2c5a4df57af3864bbcee096bd5238bd521d7c0eb89c0e8c58272fe not found: ID does not exist"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.967108 4812 scope.go:117] "RemoveContainer" containerID="2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b"
Nov 24 20:53:43 crc kubenswrapper[4812]: E1124 20:53:43.967357 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b\": container with ID starting with 2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b not found: ID does not exist" containerID="2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.967381 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b"} err="failed to get container status \"2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b\": rpc error: code = NotFound desc = could not find container \"2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b\": container with ID starting with 2b6415980dad397539905b1059d35bf6ba2f611c71c0f01cf941fe77f7a4c82b not found: ID does not exist"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.967396 4812 scope.go:117] "RemoveContainer" containerID="fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d"
Nov 24 20:53:43 crc kubenswrapper[4812]: E1124 20:53:43.967665 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d\": container with ID starting with fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d not found: ID does not exist" containerID="fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d"
Nov 24 20:53:43 crc kubenswrapper[4812]: I1124 20:53:43.967690 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d"} err="failed to get container status \"fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d\": rpc error: code = NotFound desc = could not find container \"fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d\": container with ID starting with fd8f4e72e6bb6a7dcf44c3144493856707cd9cd0b34338c87fb3fa67dc14f14d not found: ID does not exist"
Nov 24 20:53:44 crc kubenswrapper[4812]: I1124 20:53:44.986101 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" path="/var/lib/kubelet/pods/6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611/volumes"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.148901 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-c82f7"]
Nov 24 20:53:58 crc kubenswrapper[4812]: E1124 20:53:58.150975 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="extract-utilities"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.151029 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="extract-utilities"
Nov 24 20:53:58 crc kubenswrapper[4812]: E1124 20:53:58.151094 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="extract-content"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.151105 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="extract-content"
Nov 24 20:53:58 crc kubenswrapper[4812]: E1124 20:53:58.151122 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="registry-server"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.151130 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="registry-server"
Nov 24 20:53:58 crc kubenswrapper[4812]: E1124 20:53:58.151152 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="extract-utilities"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.151160 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="extract-utilities"
Nov 24 20:53:58 crc kubenswrapper[4812]: E1124 20:53:58.151200 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="extract-content"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.151209 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="extract-content"
Nov 24 20:53:58 crc kubenswrapper[4812]: E1124 20:53:58.151229 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="registry-server"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.151237 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="registry-server"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.152117 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd71bc1-ec2e-427e-9c10-9cbe4c6ac611" containerName="registry-server"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.152163 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cb6407-4424-427d-9335-a44fbe01568b" containerName="registry-server"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.153311 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.173157 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-c82f7"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.239638 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-f8hch"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.246853 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.247072 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f8hch"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.287681 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-operator-scripts\") pod \"nova-api-db-create-c82f7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") " pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.287849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n65ld\" (UniqueName: \"kubernetes.io/projected/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-kube-api-access-n65ld\") pod \"nova-api-db-create-c82f7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") " pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.353024 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-fl2tv"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.354066 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.365094 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8317-account-create-wcnt7"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.366465 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.368750 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.388268 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fl2tv"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.389273 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-operator-scripts\") pod \"nova-api-db-create-c82f7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") " pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.389319 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkhh\" (UniqueName: \"kubernetes.io/projected/71f85d53-13a9-4baa-bd62-3015c5c95019-kube-api-access-4rkhh\") pod \"nova-cell0-db-create-f8hch\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") " pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.389374 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n65ld\" (UniqueName: \"kubernetes.io/projected/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-kube-api-access-n65ld\") pod \"nova-api-db-create-c82f7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") " pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.389473 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f85d53-13a9-4baa-bd62-3015c5c95019-operator-scripts\") pod \"nova-cell0-db-create-f8hch\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") " pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.390134 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-operator-scripts\") pod \"nova-api-db-create-c82f7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") " pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.394130 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8317-account-create-wcnt7"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.411083 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n65ld\" (UniqueName: \"kubernetes.io/projected/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-kube-api-access-n65ld\") pod \"nova-api-db-create-c82f7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") " pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.491490 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdkj\" (UniqueName: \"kubernetes.io/projected/75b21ece-3976-4e21-8b32-1e316ad1af2c-kube-api-access-xjdkj\") pod \"nova-api-8317-account-create-wcnt7\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") " pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.491569 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rkhh\" (UniqueName: \"kubernetes.io/projected/71f85d53-13a9-4baa-bd62-3015c5c95019-kube-api-access-4rkhh\") pod \"nova-cell0-db-create-f8hch\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") " pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.491603 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7576e0ff-d8e5-48a1-a022-8196382f4a32-operator-scripts\") pod \"nova-cell1-db-create-fl2tv\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") " pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.491666 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b21ece-3976-4e21-8b32-1e316ad1af2c-operator-scripts\") pod \"nova-api-8317-account-create-wcnt7\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") " pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.491752 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sl7m\" (UniqueName: \"kubernetes.io/projected/7576e0ff-d8e5-48a1-a022-8196382f4a32-kube-api-access-9sl7m\") pod \"nova-cell1-db-create-fl2tv\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") " pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.491847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f85d53-13a9-4baa-bd62-3015c5c95019-operator-scripts\") pod \"nova-cell0-db-create-f8hch\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") " pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.492681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f85d53-13a9-4baa-bd62-3015c5c95019-operator-scripts\") pod \"nova-cell0-db-create-f8hch\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") " pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.497696 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.507885 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rkhh\" (UniqueName: \"kubernetes.io/projected/71f85d53-13a9-4baa-bd62-3015c5c95019-kube-api-access-4rkhh\") pod \"nova-cell0-db-create-f8hch\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") " pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.558173 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-04b0-account-create-7n4dp"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.559718 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.563267 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.569523 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.574827 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-04b0-account-create-7n4dp"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.593292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7576e0ff-d8e5-48a1-a022-8196382f4a32-operator-scripts\") pod \"nova-cell1-db-create-fl2tv\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") " pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.593398 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b21ece-3976-4e21-8b32-1e316ad1af2c-operator-scripts\") pod \"nova-api-8317-account-create-wcnt7\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") " pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.593483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sl7m\" (UniqueName: \"kubernetes.io/projected/7576e0ff-d8e5-48a1-a022-8196382f4a32-kube-api-access-9sl7m\") pod \"nova-cell1-db-create-fl2tv\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") " pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.593583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdkj\" (UniqueName: \"kubernetes.io/projected/75b21ece-3976-4e21-8b32-1e316ad1af2c-kube-api-access-xjdkj\") pod \"nova-api-8317-account-create-wcnt7\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") " pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.594825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7576e0ff-d8e5-48a1-a022-8196382f4a32-operator-scripts\") pod \"nova-cell1-db-create-fl2tv\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") " pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.595302 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b21ece-3976-4e21-8b32-1e316ad1af2c-operator-scripts\") pod \"nova-api-8317-account-create-wcnt7\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") " pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.618601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdkj\" (UniqueName: \"kubernetes.io/projected/75b21ece-3976-4e21-8b32-1e316ad1af2c-kube-api-access-xjdkj\") pod \"nova-api-8317-account-create-wcnt7\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") " pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.618904 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sl7m\" (UniqueName: \"kubernetes.io/projected/7576e0ff-d8e5-48a1-a022-8196382f4a32-kube-api-access-9sl7m\") pod \"nova-cell1-db-create-fl2tv\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") " pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.676229 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.685837 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.697471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tjxl\" (UniqueName: \"kubernetes.io/projected/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-kube-api-access-4tjxl\") pod \"nova-cell0-04b0-account-create-7n4dp\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.697698 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-operator-scripts\") pod \"nova-cell0-04b0-account-create-7n4dp\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.766273 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1080-account-create-6b8hj"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.767746 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.769961 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.771933 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1080-account-create-6b8hj"]
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.800636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-operator-scripts\") pod \"nova-cell0-04b0-account-create-7n4dp\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.801022 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tjxl\" (UniqueName: \"kubernetes.io/projected/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-kube-api-access-4tjxl\") pod \"nova-cell0-04b0-account-create-7n4dp\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.801429 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-operator-scripts\") pod \"nova-cell0-04b0-account-create-7n4dp\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.823578 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tjxl\" (UniqueName: \"kubernetes.io/projected/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-kube-api-access-4tjxl\") pod \"nova-cell0-04b0-account-create-7n4dp\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.902394 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cgpd\" (UniqueName: \"kubernetes.io/projected/18a6b846-c044-4e56-bc3a-25fec00002fd-kube-api-access-8cgpd\") pod \"nova-cell1-1080-account-create-6b8hj\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:58 crc kubenswrapper[4812]: I1124 20:53:58.902719 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18a6b846-c044-4e56-bc3a-25fec00002fd-operator-scripts\") pod \"nova-cell1-1080-account-create-6b8hj\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.003883 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cgpd\" (UniqueName: \"kubernetes.io/projected/18a6b846-c044-4e56-bc3a-25fec00002fd-kube-api-access-8cgpd\") pod \"nova-cell1-1080-account-create-6b8hj\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.003995 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18a6b846-c044-4e56-bc3a-25fec00002fd-operator-scripts\") pod \"nova-cell1-1080-account-create-6b8hj\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.004207 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-04b0-account-create-7n4dp"
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.004747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18a6b846-c044-4e56-bc3a-25fec00002fd-operator-scripts\") pod \"nova-cell1-1080-account-create-6b8hj\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.028799 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cgpd\" (UniqueName: \"kubernetes.io/projected/18a6b846-c044-4e56-bc3a-25fec00002fd-kube-api-access-8cgpd\") pod \"nova-cell1-1080-account-create-6b8hj\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.061868 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-c82f7"]
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.087207 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1080-account-create-6b8hj"
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.131188 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f8hch"]
Nov 24 20:53:59 crc kubenswrapper[4812]: W1124 20:53:59.141246 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71f85d53_13a9_4baa_bd62_3015c5c95019.slice/crio-334c781365719f1b44ba0a8a6732cc0e06d773c3e221640ad0137f536814598a WatchSource:0}: Error finding container 334c781365719f1b44ba0a8a6732cc0e06d773c3e221640ad0137f536814598a: Status 404 returned error can't find the container with id 334c781365719f1b44ba0a8a6732cc0e06d773c3e221640ad0137f536814598a
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.211982 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-fl2tv"]
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.263319 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8317-account-create-wcnt7"]
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.446911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-04b0-account-create-7n4dp"]
Nov 24 20:53:59 crc kubenswrapper[4812]: I1124 20:53:59.561906 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1080-account-create-6b8hj"]
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.036485 4812 generic.go:334] "Generic (PLEG): container finished" podID="71f85d53-13a9-4baa-bd62-3015c5c95019" containerID="47f44abb95b3ce798af38fe0d3df75085f2b605ec71855a5e05432caf8803b56" exitCode=0
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.036563 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f8hch" event={"ID":"71f85d53-13a9-4baa-bd62-3015c5c95019","Type":"ContainerDied","Data":"47f44abb95b3ce798af38fe0d3df75085f2b605ec71855a5e05432caf8803b56"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.036869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f8hch" event={"ID":"71f85d53-13a9-4baa-bd62-3015c5c95019","Type":"ContainerStarted","Data":"334c781365719f1b44ba0a8a6732cc0e06d773c3e221640ad0137f536814598a"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.039044 4812 generic.go:334] "Generic (PLEG): container finished" podID="7576e0ff-d8e5-48a1-a022-8196382f4a32" containerID="4629baa85eb91dc0136a7f9506fb8adc45f602f34b50c24db6a5280582951643" exitCode=0
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.039133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fl2tv" event={"ID":"7576e0ff-d8e5-48a1-a022-8196382f4a32","Type":"ContainerDied","Data":"4629baa85eb91dc0136a7f9506fb8adc45f602f34b50c24db6a5280582951643"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.039235 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fl2tv" event={"ID":"7576e0ff-d8e5-48a1-a022-8196382f4a32","Type":"ContainerStarted","Data":"90aed91818b37860fac946a11b6d454a07091419a8b5f4b5f71896fff45a1a46"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.040607 4812 generic.go:334] "Generic (PLEG): container finished" podID="75b21ece-3976-4e21-8b32-1e316ad1af2c" containerID="a76e9fd7ec26815a3de93409fd5fb61b449d07f9c7df95a987a13875c9a9bae9" exitCode=0
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.040683 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8317-account-create-wcnt7" event={"ID":"75b21ece-3976-4e21-8b32-1e316ad1af2c","Type":"ContainerDied","Data":"a76e9fd7ec26815a3de93409fd5fb61b449d07f9c7df95a987a13875c9a9bae9"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.040709 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8317-account-create-wcnt7" event={"ID":"75b21ece-3976-4e21-8b32-1e316ad1af2c","Type":"ContainerStarted","Data":"5fc6f43ca83c7a300a8aad1eefcba63f05c410d93605eeea23a5823ee7550f8c"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.042677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1080-account-create-6b8hj" event={"ID":"18a6b846-c044-4e56-bc3a-25fec00002fd","Type":"ContainerStarted","Data":"49bdb351682d0d46b34001a7176c753828e21609402ab547ea2ce429249bf2ec"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.042711 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1080-account-create-6b8hj" event={"ID":"18a6b846-c044-4e56-bc3a-25fec00002fd","Type":"ContainerStarted","Data":"8d3e889de95c5e68298a9f92240c1b2e3f819df9597b1e8d949977a62d590364"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.046108 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ab4449a-a536-4c7d-a405-fb3f44af9ed7" containerID="15608135dcc6a73e7cfbdae69ad3801a267454d7f1ce13f1321f54905fae364a" exitCode=0
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.046164 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c82f7" event={"ID":"1ab4449a-a536-4c7d-a405-fb3f44af9ed7","Type":"ContainerDied","Data":"15608135dcc6a73e7cfbdae69ad3801a267454d7f1ce13f1321f54905fae364a"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.046184 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c82f7" event={"ID":"1ab4449a-a536-4c7d-a405-fb3f44af9ed7","Type":"ContainerStarted","Data":"26ef5599da082bf70af9f91ba9f6b3e88ca2a2f6dae7bb0359efc388b5c541b9"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.048985 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-04b0-account-create-7n4dp" event={"ID":"f551f2e4-d0b9-4cf1-9131-24075e1a9b92","Type":"ContainerStarted","Data":"dca6d2bfcc6eb630eccdae751299ff905121a69131433d98bce8554a1b34f26a"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.049054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-04b0-account-create-7n4dp" event={"ID":"f551f2e4-d0b9-4cf1-9131-24075e1a9b92","Type":"ContainerStarted","Data":"1f3a018e69c397db75ede8d400cc41b87f7649bb2a0ad4527c0578fb3f91b0e5"}
Nov 24 20:54:00 crc kubenswrapper[4812]: I1124 20:54:00.109753 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-04b0-account-create-7n4dp" podStartSLOduration=2.109734128 podStartE2EDuration="2.109734128s" podCreationTimestamp="2025-11-24 20:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:00.101733251 +0000 UTC m=+5833.890685632" watchObservedRunningTime="2025-11-24 20:54:00.109734128 +0000 UTC m=+5833.898686499"
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.065376 4812 generic.go:334] "Generic (PLEG): container finished" podID="18a6b846-c044-4e56-bc3a-25fec00002fd" containerID="49bdb351682d0d46b34001a7176c753828e21609402ab547ea2ce429249bf2ec" exitCode=0
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.065456 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1080-account-create-6b8hj" event={"ID":"18a6b846-c044-4e56-bc3a-25fec00002fd","Type":"ContainerDied","Data":"49bdb351682d0d46b34001a7176c753828e21609402ab547ea2ce429249bf2ec"}
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.068908 4812 generic.go:334] "Generic (PLEG): container finished" podID="f551f2e4-d0b9-4cf1-9131-24075e1a9b92" containerID="dca6d2bfcc6eb630eccdae751299ff905121a69131433d98bce8554a1b34f26a" exitCode=0
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.069481 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-04b0-account-create-7n4dp" event={"ID":"f551f2e4-d0b9-4cf1-9131-24075e1a9b92","Type":"ContainerDied","Data":"dca6d2bfcc6eb630eccdae751299ff905121a69131433d98bce8554a1b34f26a"}
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.571154 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.663676 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-operator-scripts\") pod \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.663770 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n65ld\" (UniqueName: \"kubernetes.io/projected/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-kube-api-access-n65ld\") pod \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\" (UID: \"1ab4449a-a536-4c7d-a405-fb3f44af9ed7\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.664540 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1ab4449a-a536-4c7d-a405-fb3f44af9ed7" (UID: "1ab4449a-a536-4c7d-a405-fb3f44af9ed7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.669089 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-kube-api-access-n65ld" (OuterVolumeSpecName: "kube-api-access-n65ld") pod "1ab4449a-a536-4c7d-a405-fb3f44af9ed7" (UID: "1ab4449a-a536-4c7d-a405-fb3f44af9ed7"). InnerVolumeSpecName "kube-api-access-n65ld". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.732396 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8317-account-create-wcnt7"
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.739764 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fl2tv"
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.744795 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f8hch"
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.766236 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.766299 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n65ld\" (UniqueName: \"kubernetes.io/projected/1ab4449a-a536-4c7d-a405-fb3f44af9ed7-kube-api-access-n65ld\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.868004 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b21ece-3976-4e21-8b32-1e316ad1af2c-operator-scripts\") pod \"75b21ece-3976-4e21-8b32-1e316ad1af2c\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.868096 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sl7m\" (UniqueName: \"kubernetes.io/projected/7576e0ff-d8e5-48a1-a022-8196382f4a32-kube-api-access-9sl7m\") pod \"7576e0ff-d8e5-48a1-a022-8196382f4a32\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.868141 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7576e0ff-d8e5-48a1-a022-8196382f4a32-operator-scripts\") pod \"7576e0ff-d8e5-48a1-a022-8196382f4a32\" (UID: \"7576e0ff-d8e5-48a1-a022-8196382f4a32\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.868222 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rkhh\" (UniqueName: \"kubernetes.io/projected/71f85d53-13a9-4baa-bd62-3015c5c95019-kube-api-access-4rkhh\") pod \"71f85d53-13a9-4baa-bd62-3015c5c95019\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.868244 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjdkj\" (UniqueName: \"kubernetes.io/projected/75b21ece-3976-4e21-8b32-1e316ad1af2c-kube-api-access-xjdkj\") pod \"75b21ece-3976-4e21-8b32-1e316ad1af2c\" (UID: \"75b21ece-3976-4e21-8b32-1e316ad1af2c\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.868400 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f85d53-13a9-4baa-bd62-3015c5c95019-operator-scripts\") pod \"71f85d53-13a9-4baa-bd62-3015c5c95019\" (UID: \"71f85d53-13a9-4baa-bd62-3015c5c95019\") "
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.868996 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75b21ece-3976-4e21-8b32-1e316ad1af2c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75b21ece-3976-4e21-8b32-1e316ad1af2c" (UID: "75b21ece-3976-4e21-8b32-1e316ad1af2c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.869002 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7576e0ff-d8e5-48a1-a022-8196382f4a32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7576e0ff-d8e5-48a1-a022-8196382f4a32" (UID: "7576e0ff-d8e5-48a1-a022-8196382f4a32"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.869258 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71f85d53-13a9-4baa-bd62-3015c5c95019-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71f85d53-13a9-4baa-bd62-3015c5c95019" (UID: "71f85d53-13a9-4baa-bd62-3015c5c95019"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.871870 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71f85d53-13a9-4baa-bd62-3015c5c95019-kube-api-access-4rkhh" (OuterVolumeSpecName: "kube-api-access-4rkhh") pod "71f85d53-13a9-4baa-bd62-3015c5c95019" (UID: "71f85d53-13a9-4baa-bd62-3015c5c95019"). InnerVolumeSpecName "kube-api-access-4rkhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.873510 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b21ece-3976-4e21-8b32-1e316ad1af2c-kube-api-access-xjdkj" (OuterVolumeSpecName: "kube-api-access-xjdkj") pod "75b21ece-3976-4e21-8b32-1e316ad1af2c" (UID: "75b21ece-3976-4e21-8b32-1e316ad1af2c"). InnerVolumeSpecName "kube-api-access-xjdkj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.873867 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7576e0ff-d8e5-48a1-a022-8196382f4a32-kube-api-access-9sl7m" (OuterVolumeSpecName: "kube-api-access-9sl7m") pod "7576e0ff-d8e5-48a1-a022-8196382f4a32" (UID: "7576e0ff-d8e5-48a1-a022-8196382f4a32"). InnerVolumeSpecName "kube-api-access-9sl7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.971426 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71f85d53-13a9-4baa-bd62-3015c5c95019-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.971886 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75b21ece-3976-4e21-8b32-1e316ad1af2c-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.972022 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sl7m\" (UniqueName: \"kubernetes.io/projected/7576e0ff-d8e5-48a1-a022-8196382f4a32-kube-api-access-9sl7m\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.972133 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7576e0ff-d8e5-48a1-a022-8196382f4a32-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.972243 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjdkj\" (UniqueName: \"kubernetes.io/projected/75b21ece-3976-4e21-8b32-1e316ad1af2c-kube-api-access-xjdkj\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:01 crc kubenswrapper[4812]: I1124 20:54:01.972407 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rkhh\" (UniqueName: \"kubernetes.io/projected/71f85d53-13a9-4baa-bd62-3015c5c95019-kube-api-access-4rkhh\") on node \"crc\" DevicePath \"\""
Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.085040 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-c82f7"
Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.085059 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-c82f7" event={"ID":"1ab4449a-a536-4c7d-a405-fb3f44af9ed7","Type":"ContainerDied","Data":"26ef5599da082bf70af9f91ba9f6b3e88ca2a2f6dae7bb0359efc388b5c541b9"}
Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.085891 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26ef5599da082bf70af9f91ba9f6b3e88ca2a2f6dae7bb0359efc388b5c541b9"
Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.088294 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f8hch" event={"ID":"71f85d53-13a9-4baa-bd62-3015c5c95019","Type":"ContainerDied","Data":"334c781365719f1b44ba0a8a6732cc0e06d773c3e221640ad0137f536814598a"}
Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.088431 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334c781365719f1b44ba0a8a6732cc0e06d773c3e221640ad0137f536814598a"
Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.088311 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f8hch"
Need to start a new one" pod="openstack/nova-cell0-db-create-f8hch" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.091838 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-fl2tv" event={"ID":"7576e0ff-d8e5-48a1-a022-8196382f4a32","Type":"ContainerDied","Data":"90aed91818b37860fac946a11b6d454a07091419a8b5f4b5f71896fff45a1a46"} Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.091892 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90aed91818b37860fac946a11b6d454a07091419a8b5f4b5f71896fff45a1a46" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.092000 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-fl2tv" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.097689 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8317-account-create-wcnt7" event={"ID":"75b21ece-3976-4e21-8b32-1e316ad1af2c","Type":"ContainerDied","Data":"5fc6f43ca83c7a300a8aad1eefcba63f05c410d93605eeea23a5823ee7550f8c"} Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.097799 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8317-account-create-wcnt7" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.097823 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fc6f43ca83c7a300a8aad1eefcba63f05c410d93605eeea23a5823ee7550f8c" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.545736 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1080-account-create-6b8hj" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.555422 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-04b0-account-create-7n4dp" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.687085 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tjxl\" (UniqueName: \"kubernetes.io/projected/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-kube-api-access-4tjxl\") pod \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.687565 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-operator-scripts\") pod \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\" (UID: \"f551f2e4-d0b9-4cf1-9131-24075e1a9b92\") " Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.687608 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18a6b846-c044-4e56-bc3a-25fec00002fd-operator-scripts\") pod \"18a6b846-c044-4e56-bc3a-25fec00002fd\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.687641 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cgpd\" (UniqueName: \"kubernetes.io/projected/18a6b846-c044-4e56-bc3a-25fec00002fd-kube-api-access-8cgpd\") pod \"18a6b846-c044-4e56-bc3a-25fec00002fd\" (UID: \"18a6b846-c044-4e56-bc3a-25fec00002fd\") " Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.688241 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f551f2e4-d0b9-4cf1-9131-24075e1a9b92" (UID: "f551f2e4-d0b9-4cf1-9131-24075e1a9b92"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.688274 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a6b846-c044-4e56-bc3a-25fec00002fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18a6b846-c044-4e56-bc3a-25fec00002fd" (UID: "18a6b846-c044-4e56-bc3a-25fec00002fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.692480 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a6b846-c044-4e56-bc3a-25fec00002fd-kube-api-access-8cgpd" (OuterVolumeSpecName: "kube-api-access-8cgpd") pod "18a6b846-c044-4e56-bc3a-25fec00002fd" (UID: "18a6b846-c044-4e56-bc3a-25fec00002fd"). InnerVolumeSpecName "kube-api-access-8cgpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.695539 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-kube-api-access-4tjxl" (OuterVolumeSpecName: "kube-api-access-4tjxl") pod "f551f2e4-d0b9-4cf1-9131-24075e1a9b92" (UID: "f551f2e4-d0b9-4cf1-9131-24075e1a9b92"). InnerVolumeSpecName "kube-api-access-4tjxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.809316 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tjxl\" (UniqueName: \"kubernetes.io/projected/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-kube-api-access-4tjxl\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.809369 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f551f2e4-d0b9-4cf1-9131-24075e1a9b92-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.809380 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18a6b846-c044-4e56-bc3a-25fec00002fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:02 crc kubenswrapper[4812]: I1124 20:54:02.809389 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cgpd\" (UniqueName: \"kubernetes.io/projected/18a6b846-c044-4e56-bc3a-25fec00002fd-kube-api-access-8cgpd\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.113482 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-04b0-account-create-7n4dp" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.113478 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-04b0-account-create-7n4dp" event={"ID":"f551f2e4-d0b9-4cf1-9131-24075e1a9b92","Type":"ContainerDied","Data":"1f3a018e69c397db75ede8d400cc41b87f7649bb2a0ad4527c0578fb3f91b0e5"} Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.113640 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f3a018e69c397db75ede8d400cc41b87f7649bb2a0ad4527c0578fb3f91b0e5" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.116578 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1080-account-create-6b8hj" event={"ID":"18a6b846-c044-4e56-bc3a-25fec00002fd","Type":"ContainerDied","Data":"8d3e889de95c5e68298a9f92240c1b2e3f819df9597b1e8d949977a62d590364"} Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.116627 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d3e889de95c5e68298a9f92240c1b2e3f819df9597b1e8d949977a62d590364" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.116743 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1080-account-create-6b8hj" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.864774 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zsgrz"] Nov 24 20:54:03 crc kubenswrapper[4812]: E1124 20:54:03.865140 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f551f2e4-d0b9-4cf1-9131-24075e1a9b92" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865158 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f551f2e4-d0b9-4cf1-9131-24075e1a9b92" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: E1124 20:54:03.865171 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b21ece-3976-4e21-8b32-1e316ad1af2c" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865178 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b21ece-3976-4e21-8b32-1e316ad1af2c" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: E1124 20:54:03.865184 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71f85d53-13a9-4baa-bd62-3015c5c95019" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865190 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="71f85d53-13a9-4baa-bd62-3015c5c95019" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: E1124 20:54:03.865204 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ab4449a-a536-4c7d-a405-fb3f44af9ed7" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865210 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ab4449a-a536-4c7d-a405-fb3f44af9ed7" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: E1124 20:54:03.865224 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7576e0ff-d8e5-48a1-a022-8196382f4a32" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865230 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7576e0ff-d8e5-48a1-a022-8196382f4a32" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: E1124 20:54:03.865244 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18a6b846-c044-4e56-bc3a-25fec00002fd" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865250 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="18a6b846-c044-4e56-bc3a-25fec00002fd" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865416 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="18a6b846-c044-4e56-bc3a-25fec00002fd" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865427 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f551f2e4-d0b9-4cf1-9131-24075e1a9b92" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865437 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b21ece-3976-4e21-8b32-1e316ad1af2c" containerName="mariadb-account-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865460 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="71f85d53-13a9-4baa-bd62-3015c5c95019" 
containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865472 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ab4449a-a536-4c7d-a405-fb3f44af9ed7" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.865485 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7576e0ff-d8e5-48a1-a022-8196382f4a32" containerName="mariadb-database-create" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.866020 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.872593 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8lb8r" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.873122 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.873835 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.879659 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zsgrz"] Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.933454 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-config-data\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.933498 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bww9j\" (UniqueName: \"kubernetes.io/projected/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-kube-api-access-bww9j\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.933736 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:03 crc kubenswrapper[4812]: I1124 20:54:03.933785 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-scripts\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.035245 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bww9j\" (UniqueName: \"kubernetes.io/projected/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-kube-api-access-bww9j\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.036952 4812 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.037008 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-scripts\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.037140 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-config-data\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.044588 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.045922 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-scripts\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.047086 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-config-data\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.058394 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bww9j\" (UniqueName: \"kubernetes.io/projected/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-kube-api-access-bww9j\") pod \"nova-cell0-conductor-db-sync-zsgrz\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.180707 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:04 crc kubenswrapper[4812]: I1124 20:54:04.652633 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zsgrz"] Nov 24 20:54:04 crc kubenswrapper[4812]: W1124 20:54:04.655503 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2af7fb62_3e16_45a7_87da_83d6ecc2b6b2.slice/crio-df1927a00567b9b9b21ece7a1763c2777043e77d683f6ff8a46c8538b40631e0 WatchSource:0}: Error finding container df1927a00567b9b9b21ece7a1763c2777043e77d683f6ff8a46c8538b40631e0: Status 404 returned error can't find the container with id df1927a00567b9b9b21ece7a1763c2777043e77d683f6ff8a46c8538b40631e0 Nov 24 20:54:05 crc kubenswrapper[4812]: I1124 20:54:05.137879 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" event={"ID":"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2","Type":"ContainerStarted","Data":"254c9f945f2c0239303a1fd368eead6cdc04b2c52fcf59402657ccd0f40bfa2a"} Nov 24 20:54:05 crc kubenswrapper[4812]: I1124 20:54:05.138460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" event={"ID":"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2","Type":"ContainerStarted","Data":"df1927a00567b9b9b21ece7a1763c2777043e77d683f6ff8a46c8538b40631e0"} Nov 24 20:54:05 crc kubenswrapper[4812]: I1124 20:54:05.157800 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" podStartSLOduration=2.15778012 podStartE2EDuration="2.15778012s" podCreationTimestamp="2025-11-24 20:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:05.156614287 +0000 UTC m=+5838.945566668" watchObservedRunningTime="2025-11-24 20:54:05.15778012 +0000 UTC m=+5838.946732501" Nov 24 20:54:10 crc kubenswrapper[4812]: I1124 20:54:10.211711 4812 generic.go:334] "Generic (PLEG): container finished" podID="2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" containerID="254c9f945f2c0239303a1fd368eead6cdc04b2c52fcf59402657ccd0f40bfa2a" exitCode=0 Nov 24 20:54:10 crc kubenswrapper[4812]: I1124 20:54:10.212299 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" event={"ID":"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2","Type":"ContainerDied","Data":"254c9f945f2c0239303a1fd368eead6cdc04b2c52fcf59402657ccd0f40bfa2a"} Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.625772 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.827570 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-config-data\") pod \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.827827 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-combined-ca-bundle\") pod \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.827867 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-scripts\") pod \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.828095 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bww9j\" (UniqueName: \"kubernetes.io/projected/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-kube-api-access-bww9j\") pod \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\" (UID: \"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2\") " Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.836937 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-kube-api-access-bww9j" (OuterVolumeSpecName: "kube-api-access-bww9j") pod "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" (UID: "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2"). InnerVolumeSpecName "kube-api-access-bww9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.836937 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-scripts" (OuterVolumeSpecName: "scripts") pod "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" (UID: "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.877231 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" (UID: "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.878991 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-config-data" (OuterVolumeSpecName: "config-data") pod "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" (UID: "2af7fb62-3e16-45a7-87da-83d6ecc2b6b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.932143 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bww9j\" (UniqueName: \"kubernetes.io/projected/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-kube-api-access-bww9j\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.932191 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.932211 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:11 crc kubenswrapper[4812]: I1124 20:54:11.932227 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.238952 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" event={"ID":"2af7fb62-3e16-45a7-87da-83d6ecc2b6b2","Type":"ContainerDied","Data":"df1927a00567b9b9b21ece7a1763c2777043e77d683f6ff8a46c8538b40631e0"} Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.239016 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df1927a00567b9b9b21ece7a1763c2777043e77d683f6ff8a46c8538b40631e0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.239058 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zsgrz" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.352893 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 20:54:12 crc kubenswrapper[4812]: E1124 20:54:12.353863 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" containerName="nova-cell0-conductor-db-sync" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.354064 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" containerName="nova-cell0-conductor-db-sync" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.354657 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" containerName="nova-cell0-conductor-db-sync" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.356022 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.360670 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.371536 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8lb8r" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.386873 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.545242 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v8m6\" (UniqueName: \"kubernetes.io/projected/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-kube-api-access-9v8m6\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.545778 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.545853 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.648060 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.648130 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.648227 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v8m6\" (UniqueName: \"kubernetes.io/projected/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-kube-api-access-9v8m6\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.656241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.658806 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.683414 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v8m6\" (UniqueName: \"kubernetes.io/projected/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-kube-api-access-9v8m6\") pod \"nova-cell0-conductor-0\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:12 crc kubenswrapper[4812]: I1124 20:54:12.980690 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:13 crc kubenswrapper[4812]: I1124 20:54:13.487452 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 20:54:14 crc kubenswrapper[4812]: I1124 20:54:14.266947 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66","Type":"ContainerStarted","Data":"d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb"} Nov 24 20:54:14 crc kubenswrapper[4812]: I1124 20:54:14.267302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66","Type":"ContainerStarted","Data":"3ce7236eea21df5331f48c9624f2ded1012b8a51612c9a13391ee1ce0f78849f"} Nov 24 20:54:14 crc kubenswrapper[4812]: I1124 20:54:14.267573 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:14 crc kubenswrapper[4812]: I1124 20:54:14.308983 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.3089537829999998 podStartE2EDuration="2.308953783s" podCreationTimestamp="2025-11-24 20:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:14.290510351 +0000 UTC m=+5848.079462762" watchObservedRunningTime="2025-11-24 20:54:14.308953783 +0000 UTC m=+5848.097906194" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.021381 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.642765 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-7xzss"] Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.644992 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.647463 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.648304 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.652170 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xzss"] Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.792591 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdhn\" (UniqueName: \"kubernetes.io/projected/74b9a17c-1408-4008-a7ce-2805decab132-kube-api-access-4qdhn\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.792746 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-scripts\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.792834 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-config-data\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.792873 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.819690 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.821017 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.824376 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.842607 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.893989 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-config-data\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.894037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.894059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.894096 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdhn\" (UniqueName: \"kubernetes.io/projected/74b9a17c-1408-4008-a7ce-2805decab132-kube-api-access-4qdhn\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.894147 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-config-data\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.894173 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw4ds\" (UniqueName: \"kubernetes.io/projected/d05ed2f4-83c0-4baa-be13-ba5e179b4804-kube-api-access-jw4ds\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.894241 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-scripts\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.902108 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.904245 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-config-data\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.921920 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-scripts\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.934831 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdhn\" (UniqueName: \"kubernetes.io/projected/74b9a17c-1408-4008-a7ce-2805decab132-kube-api-access-4qdhn\") pod \"nova-cell0-cell-mapping-7xzss\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.944760 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.946329 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.949660 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.955007 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.964946 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.966073 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.969708 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.978629 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.995231 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.995312 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-config-data\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:23 crc kubenswrapper[4812]: I1124 20:54:23.995350 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw4ds\" (UniqueName: \"kubernetes.io/projected/d05ed2f4-83c0-4baa-be13-ba5e179b4804-kube-api-access-jw4ds\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.004510 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.005441 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-config-data\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.016212 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.031653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw4ds\" (UniqueName: \"kubernetes.io/projected/d05ed2f4-83c0-4baa-be13-ba5e179b4804-kube-api-access-jw4ds\") pod \"nova-scheduler-0\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " pod="openstack/nova-scheduler-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.054224 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6678c6cd77-b7z64"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.055925 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.077660 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.079396 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.086876 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6678c6cd77-b7z64"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.093876 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.097056 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.097859 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bzhg\" (UniqueName: \"kubernetes.io/projected/143466f2-dd6e-4237-8e35-5e24198ed56d-kube-api-access-2bzhg\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.097945 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.098012 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.098123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl7bz\" (UniqueName: \"kubernetes.io/projected/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-kube-api-access-vl7bz\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.098230 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-logs\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.098348 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-config-data\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.113420 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.148127 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199755 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199800 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl7bz\" (UniqueName: \"kubernetes.io/projected/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-kube-api-access-vl7bz\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199833 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-nb\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199861 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-logs\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199898 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-config-data\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199923 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199946 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-dns-svc\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.199965 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv72p\" (UniqueName: \"kubernetes.io/projected/b617d316-138a-4c93-a1bf-8bb810a33398-kube-api-access-wv72p\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200007 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc7fc21f-e348-494e-b969-197ae9069b36-logs\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200041 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgjlt\" (UniqueName: \"kubernetes.io/projected/bc7fc21f-e348-494e-b969-197ae9069b36-kube-api-access-zgjlt\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200056 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-config-data\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200084 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bzhg\" (UniqueName: \"kubernetes.io/projected/143466f2-dd6e-4237-8e35-5e24198ed56d-kube-api-access-2bzhg\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200099 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-sb\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200116 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200133 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200153 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-config\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.200784 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-logs\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.209910 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-config-data\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.210967 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.212630 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.214908 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bzhg\" (UniqueName: \"kubernetes.io/projected/143466f2-dd6e-4237-8e35-5e24198ed56d-kube-api-access-2bzhg\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.217168 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl7bz\" (UniqueName: \"kubernetes.io/projected/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-kube-api-access-vl7bz\") pod \"nova-metadata-0\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.220744 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.301535 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-config-data\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.301871 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-sb\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.301898 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-config\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.301928 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.301968 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-nb\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.302026 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-dns-svc\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.302058 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv72p\" (UniqueName: \"kubernetes.io/projected/b617d316-138a-4c93-a1bf-8bb810a33398-kube-api-access-wv72p\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.302101 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc7fc21f-e348-494e-b969-197ae9069b36-logs\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.302139 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgjlt\" (UniqueName: \"kubernetes.io/projected/bc7fc21f-e348-494e-b969-197ae9069b36-kube-api-access-zgjlt\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.303140 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-sb\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.303685 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-config\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.303865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc7fc21f-e348-494e-b969-197ae9069b36-logs\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.304227 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-nb\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.304564 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-dns-svc\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.305150 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-config-data\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 
crc kubenswrapper[4812]: I1124 20:54:24.308178 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.324009 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv72p\" (UniqueName: \"kubernetes.io/projected/b617d316-138a-4c93-a1bf-8bb810a33398-kube-api-access-wv72p\") pod \"dnsmasq-dns-6678c6cd77-b7z64\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.324130 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgjlt\" (UniqueName: \"kubernetes.io/projected/bc7fc21f-e348-494e-b969-197ae9069b36-kube-api-access-zgjlt\") pod \"nova-api-0\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.373301 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.401068 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.416594 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.431342 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.522890 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xzss"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.651421 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.682938 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7gkjk"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.684139 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.686735 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.686949 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.691735 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7gkjk"] Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.812016 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-scripts\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.812076 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58lzg\" (UniqueName: \"kubernetes.io/projected/d8455425-64c6-4828-aba6-ef211a15a2ba-kube-api-access-58lzg\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.812166 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.812193 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-config-data\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.846588 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:24 crc kubenswrapper[4812]: W1124 20:54:24.847945 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3316ae1_10a3_4d2f_8a9d_36bf2f0cc260.slice/crio-dcafa47eb9332825f4055da16615e74d42306e58a9b04f111e43be83011cf2b6 WatchSource:0}: Error finding container dcafa47eb9332825f4055da16615e74d42306e58a9b04f111e43be83011cf2b6: Status 404 returned error can't find the container with id dcafa47eb9332825f4055da16615e74d42306e58a9b04f111e43be83011cf2b6 Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.914291 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.914330 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-config-data\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.914448 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-scripts\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.914474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58lzg\" (UniqueName: \"kubernetes.io/projected/d8455425-64c6-4828-aba6-ef211a15a2ba-kube-api-access-58lzg\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.932463 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-config-data\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.934400 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.956126 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-scripts\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:24 crc kubenswrapper[4812]: I1124 20:54:24.992601 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58lzg\" (UniqueName: \"kubernetes.io/projected/d8455425-64c6-4828-aba6-ef211a15a2ba-kube-api-access-58lzg\") pod \"nova-cell1-conductor-db-sync-7gkjk\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.002378 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.009634 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.074060 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6678c6cd77-b7z64"] Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.214959 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.432389 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260","Type":"ContainerStarted","Data":"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.432674 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260","Type":"ContainerStarted","Data":"dcafa47eb9332825f4055da16615e74d42306e58a9b04f111e43be83011cf2b6"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.442645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d05ed2f4-83c0-4baa-be13-ba5e179b4804","Type":"ContainerStarted","Data":"335ce9c8ffc70b8007a68e945e55d92701133418ef1b79818e817513f7f4e7a6"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.442689 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d05ed2f4-83c0-4baa-be13-ba5e179b4804","Type":"ContainerStarted","Data":"d435c46ca6593ae7585ccdedb41d6956ff5d8faa7d98e525ae2671d52eafd2c1"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.448926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"143466f2-dd6e-4237-8e35-5e24198ed56d","Type":"ContainerStarted","Data":"0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.448973 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"143466f2-dd6e-4237-8e35-5e24198ed56d","Type":"ContainerStarted","Data":"7772be8db1f46a9d45e2d5c8df3d484e872ba22f5d12f8716670ca9285d19baf"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.451915 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc7fc21f-e348-494e-b969-197ae9069b36","Type":"ContainerStarted","Data":"a9bb4cf6aba7781d6bf3aa4f4e3d7053e24f716dcced1d942fb583b91ca79a46"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.458189 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.458171611 podStartE2EDuration="2.458171611s" podCreationTimestamp="2025-11-24 20:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:25.456886125 +0000 UTC m=+5859.245838496" watchObservedRunningTime="2025-11-24 20:54:25.458171611 +0000 UTC m=+5859.247123982" Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.460001 4812 generic.go:334] "Generic (PLEG): container finished" podID="b617d316-138a-4c93-a1bf-8bb810a33398" containerID="7edfe1b23988d9624d9486f64692f8085d7282753ae3f37f628397ebeed32afc" exitCode=0 Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.460090 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" 
event={"ID":"b617d316-138a-4c93-a1bf-8bb810a33398","Type":"ContainerDied","Data":"7edfe1b23988d9624d9486f64692f8085d7282753ae3f37f628397ebeed32afc"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.460120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" event={"ID":"b617d316-138a-4c93-a1bf-8bb810a33398","Type":"ContainerStarted","Data":"f4b9ee761b7d4baaf160f6df7989f36daa71c5fe3374174de12477ee2fbf64b4"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.462037 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xzss" event={"ID":"74b9a17c-1408-4008-a7ce-2805decab132","Type":"ContainerStarted","Data":"1ca892379c597a3c1f3242eb7ef0cf9f523784315ef3bb2969204feb27d835f9"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.462083 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xzss" event={"ID":"74b9a17c-1408-4008-a7ce-2805decab132","Type":"ContainerStarted","Data":"bbfd192674507f9af7185781b67758e9b89c9353998283027bddee1399ed341a"} Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.477296 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.477278282 podStartE2EDuration="2.477278282s" podCreationTimestamp="2025-11-24 20:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:25.473611058 +0000 UTC m=+5859.262563429" watchObservedRunningTime="2025-11-24 20:54:25.477278282 +0000 UTC m=+5859.266230653" Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.571512 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-7xzss" podStartSLOduration=2.5714947070000003 podStartE2EDuration="2.571494707s" podCreationTimestamp="2025-11-24 20:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:25.510551763 +0000 UTC m=+5859.299504134" watchObservedRunningTime="2025-11-24 20:54:25.571494707 +0000 UTC m=+5859.360447078" Nov 24 20:54:25 crc kubenswrapper[4812]: I1124 20:54:25.577391 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7gkjk"] Nov 24 20:54:25 crc kubenswrapper[4812]: W1124 20:54:25.588467 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8455425_64c6_4828_aba6_ef211a15a2ba.slice/crio-36b157580d00b73fbd14213c66e51f787c332211d23019c822f36b14839e2d20 WatchSource:0}: Error finding container 36b157580d00b73fbd14213c66e51f787c332211d23019c822f36b14839e2d20: Status 404 returned error can't find the container with id 36b157580d00b73fbd14213c66e51f787c332211d23019c822f36b14839e2d20 Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.470537 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" event={"ID":"b617d316-138a-4c93-a1bf-8bb810a33398","Type":"ContainerStarted","Data":"80f76c14a785e7460005aca193011ab9c16732b2c6318b05149fd30316046a1e"} Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.470988 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.472633 4812 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-metadata-0" event={"ID":"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260","Type":"ContainerStarted","Data":"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc"} Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.478706 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc7fc21f-e348-494e-b969-197ae9069b36","Type":"ContainerStarted","Data":"2aa38d93618e20854428f7510143b3371889dc651d29973ca0047189ddb62fda"} Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.478732 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc7fc21f-e348-494e-b969-197ae9069b36","Type":"ContainerStarted","Data":"561edc684fd8b088643ef896b35931053121bb30f1ad8fa84ad9b26759734ff9"} Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.482330 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" event={"ID":"d8455425-64c6-4828-aba6-ef211a15a2ba","Type":"ContainerStarted","Data":"f3a58ce8cfc698c3431d4078a1d1609979f6147ef4070b57ef8bdeb25837698c"} Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.482369 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" event={"ID":"d8455425-64c6-4828-aba6-ef211a15a2ba","Type":"ContainerStarted","Data":"36b157580d00b73fbd14213c66e51f787c332211d23019c822f36b14839e2d20"} Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.523326 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.523311264 podStartE2EDuration="3.523311264s" podCreationTimestamp="2025-11-24 20:54:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:26.519623889 +0000 UTC m=+5860.308576260" watchObservedRunningTime="2025-11-24 20:54:26.523311264 +0000 UTC m=+5860.312263635" Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.524744 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" podStartSLOduration=2.524738014 podStartE2EDuration="2.524738014s" podCreationTimestamp="2025-11-24 20:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:26.494102727 +0000 UTC m=+5860.283055138" watchObservedRunningTime="2025-11-24 20:54:26.524738014 +0000 UTC m=+5860.313690385" Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.540370 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" podStartSLOduration=2.540353556 podStartE2EDuration="2.540353556s" podCreationTimestamp="2025-11-24 20:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:26.539601265 +0000 UTC m=+5860.328553636" watchObservedRunningTime="2025-11-24 20:54:26.540353556 +0000 UTC m=+5860.329305927" Nov 24 20:54:26 crc kubenswrapper[4812]: I1124 20:54:26.579999 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.579980087 podStartE2EDuration="2.579980087s" podCreationTimestamp="2025-11-24 20:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
20:54:26.577094955 +0000 UTC m=+5860.366047326" watchObservedRunningTime="2025-11-24 20:54:26.579980087 +0000 UTC m=+5860.368932478" Nov 24 20:54:28 crc kubenswrapper[4812]: I1124 20:54:28.288739 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:28 crc kubenswrapper[4812]: I1124 20:54:28.299212 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:28 crc kubenswrapper[4812]: I1124 20:54:28.299437 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="143466f2-dd6e-4237-8e35-5e24198ed56d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c" gracePeriod=30 Nov 24 20:54:28 crc kubenswrapper[4812]: I1124 20:54:28.512840 4812 generic.go:334] "Generic (PLEG): container finished" podID="d8455425-64c6-4828-aba6-ef211a15a2ba" containerID="f3a58ce8cfc698c3431d4078a1d1609979f6147ef4070b57ef8bdeb25837698c" exitCode=0 Nov 24 20:54:28 crc kubenswrapper[4812]: I1124 20:54:28.512941 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" event={"ID":"d8455425-64c6-4828-aba6-ef211a15a2ba","Type":"ContainerDied","Data":"f3a58ce8cfc698c3431d4078a1d1609979f6147ef4070b57ef8bdeb25837698c"} Nov 24 20:54:28 crc kubenswrapper[4812]: I1124 20:54:28.513028 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-log" containerID="cri-o://58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d" gracePeriod=30 Nov 24 20:54:28 crc kubenswrapper[4812]: I1124 20:54:28.513118 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-metadata" containerID="cri-o://aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc" gracePeriod=30 Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.149101 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.153362 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.159743 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.214415 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl7bz\" (UniqueName: \"kubernetes.io/projected/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-kube-api-access-vl7bz\") pod \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.214480 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-combined-ca-bundle\") pod \"143466f2-dd6e-4237-8e35-5e24198ed56d\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.214599 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-config-data\") pod \"143466f2-dd6e-4237-8e35-5e24198ed56d\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.214638 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-logs\") pod \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.214741 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-combined-ca-bundle\") pod \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.214775 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bzhg\" (UniqueName: \"kubernetes.io/projected/143466f2-dd6e-4237-8e35-5e24198ed56d-kube-api-access-2bzhg\") pod \"143466f2-dd6e-4237-8e35-5e24198ed56d\" (UID: \"143466f2-dd6e-4237-8e35-5e24198ed56d\") " Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.214884 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-config-data\") pod \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\" (UID: \"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260\") " Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.215575 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-logs" (OuterVolumeSpecName: "logs") pod "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" (UID: "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.216358 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.221698 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-kube-api-access-vl7bz" (OuterVolumeSpecName: "kube-api-access-vl7bz") pod "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" (UID: "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260"). InnerVolumeSpecName "kube-api-access-vl7bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.226500 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143466f2-dd6e-4237-8e35-5e24198ed56d-kube-api-access-2bzhg" (OuterVolumeSpecName: "kube-api-access-2bzhg") pod "143466f2-dd6e-4237-8e35-5e24198ed56d" (UID: "143466f2-dd6e-4237-8e35-5e24198ed56d"). InnerVolumeSpecName "kube-api-access-2bzhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.248154 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "143466f2-dd6e-4237-8e35-5e24198ed56d" (UID: "143466f2-dd6e-4237-8e35-5e24198ed56d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.252413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-config-data" (OuterVolumeSpecName: "config-data") pod "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" (UID: "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.254518 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-config-data" (OuterVolumeSpecName: "config-data") pod "143466f2-dd6e-4237-8e35-5e24198ed56d" (UID: "143466f2-dd6e-4237-8e35-5e24198ed56d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.256027 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" (UID: "f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.318202 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.318255 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bzhg\" (UniqueName: \"kubernetes.io/projected/143466f2-dd6e-4237-8e35-5e24198ed56d-kube-api-access-2bzhg\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.318272 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.318286 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl7bz\" (UniqueName: \"kubernetes.io/projected/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260-kube-api-access-vl7bz\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.318299 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.318312 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/143466f2-dd6e-4237-8e35-5e24198ed56d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.528133 4812 generic.go:334] "Generic (PLEG): container finished" podID="143466f2-dd6e-4237-8e35-5e24198ed56d" containerID="0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c" exitCode=0 Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.528610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"143466f2-dd6e-4237-8e35-5e24198ed56d","Type":"ContainerDied","Data":"0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c"} Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.528649 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"143466f2-dd6e-4237-8e35-5e24198ed56d","Type":"ContainerDied","Data":"7772be8db1f46a9d45e2d5c8df3d484e872ba22f5d12f8716670ca9285d19baf"} Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.528678 4812 scope.go:117] "RemoveContainer" containerID="0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.528871 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.542818 4812 generic.go:334] "Generic (PLEG): container finished" podID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerID="aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc" exitCode=0 Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.542860 4812 generic.go:334] "Generic (PLEG): container finished" podID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerID="58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d" exitCode=143 Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.543097 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260","Type":"ContainerDied","Data":"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc"} Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.543161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260","Type":"ContainerDied","Data":"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d"} Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.543176 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260","Type":"ContainerDied","Data":"dcafa47eb9332825f4055da16615e74d42306e58a9b04f111e43be83011cf2b6"} Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.543155 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.577837 4812 scope.go:117] "RemoveContainer" containerID="0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c" Nov 24 20:54:29 crc kubenswrapper[4812]: E1124 20:54:29.578447 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c\": container with ID starting with 0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c not found: ID does not exist" containerID="0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.578483 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c"} err="failed to get container status \"0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c\": rpc error: code = NotFound desc = could not find container \"0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c\": container with ID starting with 0379bf1961642b773b7c9e4bde75593c908721a656052501de70146fb569f26c not found: ID does not exist" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.578636 4812 scope.go:117] "RemoveContainer" containerID="aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.606316 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.629172 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.633701 4812 scope.go:117] "RemoveContainer" containerID="58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d" Nov 
24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.642359 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.664475 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.671497 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: E1124 20:54:29.672020 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-log" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.672050 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-log" Nov 24 20:54:29 crc kubenswrapper[4812]: E1124 20:54:29.672078 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143466f2-dd6e-4237-8e35-5e24198ed56d" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.672091 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="143466f2-dd6e-4237-8e35-5e24198ed56d" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 20:54:29 crc kubenswrapper[4812]: E1124 20:54:29.672124 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-metadata" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.672137 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-metadata" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.672527 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-log" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.672562 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" containerName="nova-metadata-metadata" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.672603 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="143466f2-dd6e-4237-8e35-5e24198ed56d" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.673532 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.682521 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.682715 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.682980 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.683101 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.685004 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.687492 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.691619 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.692082 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.699738 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.720767 4812 scope.go:117] "RemoveContainer" containerID="aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.724418 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.724476 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pxsz\" (UniqueName: \"kubernetes.io/projected/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-kube-api-access-8pxsz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.724526 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.724569 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.724630 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.724764 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.724939 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-config-data\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.725019 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wn8\" (UniqueName: \"kubernetes.io/projected/7eba93e8-854a-4975-98d9-8359b32f5378-kube-api-access-f9wn8\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.725102 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eba93e8-854a-4975-98d9-8359b32f5378-logs\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.725177 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: E1124 20:54:29.726794 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc\": container with ID starting with aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc not found: ID does not exist" containerID="aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.726827 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc"} err="failed to get container status \"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc\": rpc error: code = NotFound desc = could not find container \"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc\": container with ID starting with aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc not found: ID does not exist" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.726851 4812 scope.go:117] "RemoveContainer" containerID="58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d" Nov 24 20:54:29 crc kubenswrapper[4812]: E1124 20:54:29.727172 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d\": container with ID starting with 58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d not found: ID does not exist" containerID="58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.727193 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d"} err="failed to get container status \"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d\": rpc error: code = NotFound desc = could not find container \"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d\": container with ID starting with 
58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d not found: ID does not exist" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.727207 4812 scope.go:117] "RemoveContainer" containerID="aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.727531 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc"} err="failed to get container status \"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc\": rpc error: code = NotFound desc = could not find container \"aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc\": container with ID starting with aa005cd518ba9daafbdc38974187e927edcfd74ca112afad4e1b5d55f1efe8cc not found: ID does not exist" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.727553 4812 scope.go:117] "RemoveContainer" containerID="58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.727731 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d"} err="failed to get container status \"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d\": rpc error: code = NotFound desc = could not find container \"58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d\": container with ID starting with 58f5f3804bb66560a9e0e70e07dbed91b1f5d08755c4b7c96541c06c0536403d not found: ID does not exist" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826571 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eba93e8-854a-4975-98d9-8359b32f5378-logs\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826621 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826687 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826722 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pxsz\" (UniqueName: \"kubernetes.io/projected/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-kube-api-access-8pxsz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826761 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc 
kubenswrapper[4812]: I1124 20:54:29.826778 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826819 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826845 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826867 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-config-data\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.826890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wn8\" (UniqueName: \"kubernetes.io/projected/7eba93e8-854a-4975-98d9-8359b32f5378-kube-api-access-f9wn8\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.827058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eba93e8-854a-4975-98d9-8359b32f5378-logs\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.831527 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.832011 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.832175 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.833000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.834688 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-config-data\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.835241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.836844 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.841991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pxsz\" (UniqueName: \"kubernetes.io/projected/8c7bd34c-5266-40cb-8023-b6013d9cc8a2-kube-api-access-8pxsz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8c7bd34c-5266-40cb-8023-b6013d9cc8a2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.844714 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9wn8\" (UniqueName: \"kubernetes.io/projected/7eba93e8-854a-4975-98d9-8359b32f5378-kube-api-access-f9wn8\") pod \"nova-metadata-0\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " pod="openstack/nova-metadata-0" Nov 24 20:54:29 crc kubenswrapper[4812]: E1124 20:54:29.931112 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74b9a17c_1408_4008_a7ce_2805decab132.slice/crio-1ca892379c597a3c1f3242eb7ef0cf9f523784315ef3bb2969204feb27d835f9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74b9a17c_1408_4008_a7ce_2805decab132.slice/crio-conmon-1ca892379c597a3c1f3242eb7ef0cf9f523784315ef3bb2969204feb27d835f9.scope\": RecentStats: unable to find data in memory cache]" Nov 24 20:54:29 crc kubenswrapper[4812]: I1124 20:54:29.934655 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.029738 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.030104 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-combined-ca-bundle\") pod \"d8455425-64c6-4828-aba6-ef211a15a2ba\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.030179 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-scripts\") pod \"d8455425-64c6-4828-aba6-ef211a15a2ba\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.030452 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-config-data\") pod \"d8455425-64c6-4828-aba6-ef211a15a2ba\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.030557 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58lzg\" (UniqueName: \"kubernetes.io/projected/d8455425-64c6-4828-aba6-ef211a15a2ba-kube-api-access-58lzg\") pod \"d8455425-64c6-4828-aba6-ef211a15a2ba\" (UID: \"d8455425-64c6-4828-aba6-ef211a15a2ba\") " Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.034413 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8455425-64c6-4828-aba6-ef211a15a2ba-kube-api-access-58lzg" (OuterVolumeSpecName: "kube-api-access-58lzg") pod "d8455425-64c6-4828-aba6-ef211a15a2ba" (UID: "d8455425-64c6-4828-aba6-ef211a15a2ba"). InnerVolumeSpecName "kube-api-access-58lzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.035091 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-scripts" (OuterVolumeSpecName: "scripts") pod "d8455425-64c6-4828-aba6-ef211a15a2ba" (UID: "d8455425-64c6-4828-aba6-ef211a15a2ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.037464 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.055865 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-config-data" (OuterVolumeSpecName: "config-data") pod "d8455425-64c6-4828-aba6-ef211a15a2ba" (UID: "d8455425-64c6-4828-aba6-ef211a15a2ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.057862 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8455425-64c6-4828-aba6-ef211a15a2ba" (UID: "d8455425-64c6-4828-aba6-ef211a15a2ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.134545 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.134611 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58lzg\" (UniqueName: \"kubernetes.io/projected/d8455425-64c6-4828-aba6-ef211a15a2ba-kube-api-access-58lzg\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.134628 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.134640 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8455425-64c6-4828-aba6-ef211a15a2ba-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.502587 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.556428 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" event={"ID":"d8455425-64c6-4828-aba6-ef211a15a2ba","Type":"ContainerDied","Data":"36b157580d00b73fbd14213c66e51f787c332211d23019c822f36b14839e2d20"} Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.556461 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b157580d00b73fbd14213c66e51f787c332211d23019c822f36b14839e2d20" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.556435 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-7gkjk" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.557550 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.559389 4812 generic.go:334] "Generic (PLEG): container finished" podID="74b9a17c-1408-4008-a7ce-2805decab132" containerID="1ca892379c597a3c1f3242eb7ef0cf9f523784315ef3bb2969204feb27d835f9" exitCode=0 Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.559476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xzss" event={"ID":"74b9a17c-1408-4008-a7ce-2805decab132","Type":"ContainerDied","Data":"1ca892379c597a3c1f3242eb7ef0cf9f523784315ef3bb2969204feb27d835f9"} Nov 24 20:54:30 crc kubenswrapper[4812]: W1124 20:54:30.562932 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eba93e8_854a_4975_98d9_8359b32f5378.slice/crio-ad38204f0e5e62bf5c081e6ea397c135d8834c6fda09a8e259ea01d892dc2274 WatchSource:0}: Error finding container ad38204f0e5e62bf5c081e6ea397c135d8834c6fda09a8e259ea01d892dc2274: Status 404 returned error can't find the container with id ad38204f0e5e62bf5c081e6ea397c135d8834c6fda09a8e259ea01d892dc2274 Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.563658 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c7bd34c-5266-40cb-8023-b6013d9cc8a2","Type":"ContainerStarted","Data":"863894366c284d72390f73917f6d2ee70c348bc8d540ed93016a446846f24936"} Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.983511 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143466f2-dd6e-4237-8e35-5e24198ed56d" path="/var/lib/kubelet/pods/143466f2-dd6e-4237-8e35-5e24198ed56d/volumes" Nov 24 20:54:30 crc kubenswrapper[4812]: I1124 20:54:30.984630 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260" path="/var/lib/kubelet/pods/f3316ae1-10a3-4d2f-8a9d-36bf2f0cc260/volumes" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.018929 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 20:54:31 crc kubenswrapper[4812]: E1124 20:54:31.019379 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8455425-64c6-4828-aba6-ef211a15a2ba" containerName="nova-cell1-conductor-db-sync" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.019396 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8455425-64c6-4828-aba6-ef211a15a2ba" containerName="nova-cell1-conductor-db-sync" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.019636 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8455425-64c6-4828-aba6-ef211a15a2ba" containerName="nova-cell1-conductor-db-sync" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.022310 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.028211 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.037491 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.165531 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lv8h\" (UniqueName: \"kubernetes.io/projected/94d71826-08db-4e3a-a03d-e8141947dcc0-kube-api-access-2lv8h\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.165599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.165657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.268106 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lv8h\" (UniqueName: \"kubernetes.io/projected/94d71826-08db-4e3a-a03d-e8141947dcc0-kube-api-access-2lv8h\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.268296 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.268410 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.273838 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.273844 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.284778 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lv8h\" (UniqueName: \"kubernetes.io/projected/94d71826-08db-4e3a-a03d-e8141947dcc0-kube-api-access-2lv8h\") pod \"nova-cell1-conductor-0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.384053 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.594790 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8c7bd34c-5266-40cb-8023-b6013d9cc8a2","Type":"ContainerStarted","Data":"6e37b6def8ed84d0eaa8a580cb4d003a2e2b898a92747f4856bdf5a95c15adc4"} Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.603223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7eba93e8-854a-4975-98d9-8359b32f5378","Type":"ContainerStarted","Data":"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a"} Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.603386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7eba93e8-854a-4975-98d9-8359b32f5378","Type":"ContainerStarted","Data":"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053"} Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.603402 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7eba93e8-854a-4975-98d9-8359b32f5378","Type":"ContainerStarted","Data":"ad38204f0e5e62bf5c081e6ea397c135d8834c6fda09a8e259ea01d892dc2274"} Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.616758 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.61674299 podStartE2EDuration="2.61674299s" podCreationTimestamp="2025-11-24 20:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:31.615852485 +0000 UTC m=+5865.404804846" watchObservedRunningTime="2025-11-24 20:54:31.61674299 +0000 UTC m=+5865.405695361" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.662465 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.662444023 podStartE2EDuration="2.662444023s" podCreationTimestamp="2025-11-24 20:54:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:31.662044272 +0000 UTC m=+5865.450996633" watchObservedRunningTime="2025-11-24 20:54:31.662444023 +0000 UTC m=+5865.451396394" Nov 24 20:54:31 crc kubenswrapper[4812]: I1124 20:54:31.880221 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 20:54:31 crc kubenswrapper[4812]: W1124 20:54:31.887478 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94d71826_08db_4e3a_a03d_e8141947dcc0.slice/crio-5b9b1b03f7e7144a13a1189be99194dd7c35be80e221c2871d998e6697ae6778 WatchSource:0}: Error finding container 5b9b1b03f7e7144a13a1189be99194dd7c35be80e221c2871d998e6697ae6778: Status 404 returned error can't find the container with id 5b9b1b03f7e7144a13a1189be99194dd7c35be80e221c2871d998e6697ae6778 Nov 24 20:54:32 crc 
kubenswrapper[4812]: I1124 20:54:32.031322 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.092112 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qdhn\" (UniqueName: \"kubernetes.io/projected/74b9a17c-1408-4008-a7ce-2805decab132-kube-api-access-4qdhn\") pod \"74b9a17c-1408-4008-a7ce-2805decab132\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.092262 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-config-data\") pod \"74b9a17c-1408-4008-a7ce-2805decab132\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.092329 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-combined-ca-bundle\") pod \"74b9a17c-1408-4008-a7ce-2805decab132\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.092524 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-scripts\") pod \"74b9a17c-1408-4008-a7ce-2805decab132\" (UID: \"74b9a17c-1408-4008-a7ce-2805decab132\") " Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.100799 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74b9a17c-1408-4008-a7ce-2805decab132-kube-api-access-4qdhn" (OuterVolumeSpecName: "kube-api-access-4qdhn") pod "74b9a17c-1408-4008-a7ce-2805decab132" (UID: "74b9a17c-1408-4008-a7ce-2805decab132"). InnerVolumeSpecName "kube-api-access-4qdhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.101122 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-scripts" (OuterVolumeSpecName: "scripts") pod "74b9a17c-1408-4008-a7ce-2805decab132" (UID: "74b9a17c-1408-4008-a7ce-2805decab132"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.150831 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-config-data" (OuterVolumeSpecName: "config-data") pod "74b9a17c-1408-4008-a7ce-2805decab132" (UID: "74b9a17c-1408-4008-a7ce-2805decab132"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.152385 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74b9a17c-1408-4008-a7ce-2805decab132" (UID: "74b9a17c-1408-4008-a7ce-2805decab132"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.195309 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.195364 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qdhn\" (UniqueName: \"kubernetes.io/projected/74b9a17c-1408-4008-a7ce-2805decab132-kube-api-access-4qdhn\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.195378 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.195388 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74b9a17c-1408-4008-a7ce-2805decab132-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.628112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94d71826-08db-4e3a-a03d-e8141947dcc0","Type":"ContainerStarted","Data":"f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673"} Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.630031 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94d71826-08db-4e3a-a03d-e8141947dcc0","Type":"ContainerStarted","Data":"5b9b1b03f7e7144a13a1189be99194dd7c35be80e221c2871d998e6697ae6778"} Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.632140 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.638748 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-7xzss" event={"ID":"74b9a17c-1408-4008-a7ce-2805decab132","Type":"ContainerDied","Data":"bbfd192674507f9af7185781b67758e9b89c9353998283027bddee1399ed341a"} Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.639029 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbfd192674507f9af7185781b67758e9b89c9353998283027bddee1399ed341a" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.639888 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-7xzss" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.656514 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.656491006 podStartE2EDuration="2.656491006s" podCreationTimestamp="2025-11-24 20:54:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:32.653773899 +0000 UTC m=+5866.442726300" watchObservedRunningTime="2025-11-24 20:54:32.656491006 +0000 UTC m=+5866.445443377" Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.772622 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.773705 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-log" containerID="cri-o://2aa38d93618e20854428f7510143b3371889dc651d29973ca0047189ddb62fda" gracePeriod=30 Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.773695 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-api" containerID="cri-o://561edc684fd8b088643ef896b35931053121bb30f1ad8fa84ad9b26759734ff9" gracePeriod=30 Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.864716 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.865372 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d05ed2f4-83c0-4baa-be13-ba5e179b4804" containerName="nova-scheduler-scheduler" containerID="cri-o://335ce9c8ffc70b8007a68e945e55d92701133418ef1b79818e817513f7f4e7a6" gracePeriod=30 Nov 24 20:54:32 crc kubenswrapper[4812]: I1124 20:54:32.881371 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:33 crc kubenswrapper[4812]: I1124 20:54:33.662148 4812 generic.go:334] "Generic (PLEG): container finished" podID="bc7fc21f-e348-494e-b969-197ae9069b36" containerID="561edc684fd8b088643ef896b35931053121bb30f1ad8fa84ad9b26759734ff9" exitCode=0 Nov 24 20:54:33 crc kubenswrapper[4812]: I1124 20:54:33.662172 4812 generic.go:334] "Generic (PLEG): container finished" podID="bc7fc21f-e348-494e-b969-197ae9069b36" containerID="2aa38d93618e20854428f7510143b3371889dc651d29973ca0047189ddb62fda" exitCode=143 Nov 24 20:54:33 crc kubenswrapper[4812]: I1124 20:54:33.663127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc7fc21f-e348-494e-b969-197ae9069b36","Type":"ContainerDied","Data":"561edc684fd8b088643ef896b35931053121bb30f1ad8fa84ad9b26759734ff9"} Nov 24 20:54:33 crc kubenswrapper[4812]: I1124 20:54:33.663151 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc7fc21f-e348-494e-b969-197ae9069b36","Type":"ContainerDied","Data":"2aa38d93618e20854428f7510143b3371889dc651d29973ca0047189ddb62fda"} Nov 24 20:54:33 crc kubenswrapper[4812]: I1124 20:54:33.663266 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-log" 
containerID="cri-o://9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053" gracePeriod=30 Nov 24 20:54:33 crc kubenswrapper[4812]: I1124 20:54:33.663779 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-metadata" containerID="cri-o://76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a" gracePeriod=30 Nov 24 20:54:33 crc kubenswrapper[4812]: I1124 20:54:33.953776 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.049096 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgjlt\" (UniqueName: \"kubernetes.io/projected/bc7fc21f-e348-494e-b969-197ae9069b36-kube-api-access-zgjlt\") pod \"bc7fc21f-e348-494e-b969-197ae9069b36\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.049150 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-config-data\") pod \"bc7fc21f-e348-494e-b969-197ae9069b36\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.049211 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-combined-ca-bundle\") pod \"bc7fc21f-e348-494e-b969-197ae9069b36\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.052682 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc7fc21f-e348-494e-b969-197ae9069b36-logs\") pod \"bc7fc21f-e348-494e-b969-197ae9069b36\" (UID: \"bc7fc21f-e348-494e-b969-197ae9069b36\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.053636 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc7fc21f-e348-494e-b969-197ae9069b36-logs" (OuterVolumeSpecName: "logs") pod "bc7fc21f-e348-494e-b969-197ae9069b36" (UID: "bc7fc21f-e348-494e-b969-197ae9069b36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.055654 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7fc21f-e348-494e-b969-197ae9069b36-kube-api-access-zgjlt" (OuterVolumeSpecName: "kube-api-access-zgjlt") pod "bc7fc21f-e348-494e-b969-197ae9069b36" (UID: "bc7fc21f-e348-494e-b969-197ae9069b36"). InnerVolumeSpecName "kube-api-access-zgjlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.082709 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc7fc21f-e348-494e-b969-197ae9069b36" (UID: "bc7fc21f-e348-494e-b969-197ae9069b36"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.084756 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-config-data" (OuterVolumeSpecName: "config-data") pod "bc7fc21f-e348-494e-b969-197ae9069b36" (UID: "bc7fc21f-e348-494e-b969-197ae9069b36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.155135 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc7fc21f-e348-494e-b969-197ae9069b36-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.155171 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgjlt\" (UniqueName: \"kubernetes.io/projected/bc7fc21f-e348-494e-b969-197ae9069b36-kube-api-access-zgjlt\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.155181 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.155191 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7fc21f-e348-494e-b969-197ae9069b36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.215283 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.358808 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eba93e8-854a-4975-98d9-8359b32f5378-logs\") pod \"7eba93e8-854a-4975-98d9-8359b32f5378\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.358881 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-config-data\") pod \"7eba93e8-854a-4975-98d9-8359b32f5378\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.358973 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9wn8\" (UniqueName: \"kubernetes.io/projected/7eba93e8-854a-4975-98d9-8359b32f5378-kube-api-access-f9wn8\") pod \"7eba93e8-854a-4975-98d9-8359b32f5378\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.359160 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-nova-metadata-tls-certs\") pod \"7eba93e8-854a-4975-98d9-8359b32f5378\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.359285 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-combined-ca-bundle\") pod \"7eba93e8-854a-4975-98d9-8359b32f5378\" (UID: \"7eba93e8-854a-4975-98d9-8359b32f5378\") " Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 
20:54:34.359462 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eba93e8-854a-4975-98d9-8359b32f5378-logs" (OuterVolumeSpecName: "logs") pod "7eba93e8-854a-4975-98d9-8359b32f5378" (UID: "7eba93e8-854a-4975-98d9-8359b32f5378"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.359957 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7eba93e8-854a-4975-98d9-8359b32f5378-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.363450 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eba93e8-854a-4975-98d9-8359b32f5378-kube-api-access-f9wn8" (OuterVolumeSpecName: "kube-api-access-f9wn8") pod "7eba93e8-854a-4975-98d9-8359b32f5378" (UID: "7eba93e8-854a-4975-98d9-8359b32f5378"). InnerVolumeSpecName "kube-api-access-f9wn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.389755 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-config-data" (OuterVolumeSpecName: "config-data") pod "7eba93e8-854a-4975-98d9-8359b32f5378" (UID: "7eba93e8-854a-4975-98d9-8359b32f5378"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.390499 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eba93e8-854a-4975-98d9-8359b32f5378" (UID: "7eba93e8-854a-4975-98d9-8359b32f5378"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.418483 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.419576 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7eba93e8-854a-4975-98d9-8359b32f5378" (UID: "7eba93e8-854a-4975-98d9-8359b32f5378"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.462067 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.462115 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9wn8\" (UniqueName: \"kubernetes.io/projected/7eba93e8-854a-4975-98d9-8359b32f5378-kube-api-access-f9wn8\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.462132 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.462143 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eba93e8-854a-4975-98d9-8359b32f5378-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.499470 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9cd89ff9-dsg2v"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.499837 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" podUID="ef27b779-8611-4ba0-a021-de389bb54713" containerName="dnsmasq-dns" containerID="cri-o://a4edb38be07ab8982447384ea7f74bee958d3b36d7ad68eb697765544a7e7f48" gracePeriod=10 Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.687734 4812 generic.go:334] "Generic (PLEG): container finished" podID="ef27b779-8611-4ba0-a021-de389bb54713" containerID="a4edb38be07ab8982447384ea7f74bee958d3b36d7ad68eb697765544a7e7f48" exitCode=0 Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.688120 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" event={"ID":"ef27b779-8611-4ba0-a021-de389bb54713","Type":"ContainerDied","Data":"a4edb38be07ab8982447384ea7f74bee958d3b36d7ad68eb697765544a7e7f48"} Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.691170 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.691180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc7fc21f-e348-494e-b969-197ae9069b36","Type":"ContainerDied","Data":"a9bb4cf6aba7781d6bf3aa4f4e3d7053e24f716dcced1d942fb583b91ca79a46"} Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.691256 4812 scope.go:117] "RemoveContainer" containerID="561edc684fd8b088643ef896b35931053121bb30f1ad8fa84ad9b26759734ff9" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.696073 4812 generic.go:334] "Generic (PLEG): container finished" podID="7eba93e8-854a-4975-98d9-8359b32f5378" containerID="76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a" exitCode=0 Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.696101 4812 generic.go:334] "Generic (PLEG): container finished" podID="7eba93e8-854a-4975-98d9-8359b32f5378" containerID="9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053" exitCode=143 Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.696963 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.700489 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7eba93e8-854a-4975-98d9-8359b32f5378","Type":"ContainerDied","Data":"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a"} Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.700538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7eba93e8-854a-4975-98d9-8359b32f5378","Type":"ContainerDied","Data":"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053"} Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.700551 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7eba93e8-854a-4975-98d9-8359b32f5378","Type":"ContainerDied","Data":"ad38204f0e5e62bf5c081e6ea397c135d8834c6fda09a8e259ea01d892dc2274"} Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.723656 4812 scope.go:117] "RemoveContainer" containerID="2aa38d93618e20854428f7510143b3371889dc651d29973ca0047189ddb62fda" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.727700 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.739377 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.758010 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.774125 4812 scope.go:117] "RemoveContainer" containerID="76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.787525 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.807113 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: E1124 20:54:34.807782 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-metadata" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.807806 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-metadata" Nov 24 20:54:34 crc kubenswrapper[4812]: E1124 20:54:34.807884 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74b9a17c-1408-4008-a7ce-2805decab132" containerName="nova-manage" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.807898 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="74b9a17c-1408-4008-a7ce-2805decab132" containerName="nova-manage" Nov 24 20:54:34 crc kubenswrapper[4812]: E1124 20:54:34.807914 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-api" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.807921 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-api" Nov 24 20:54:34 crc kubenswrapper[4812]: E1124 20:54:34.807957 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-log" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.807964 4812 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-log" Nov 24 20:54:34 crc kubenswrapper[4812]: E1124 20:54:34.807994 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-log" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.808001 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-log" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.808236 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-api" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.808276 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-log" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.808288 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" containerName="nova-metadata-metadata" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.808302 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" containerName="nova-api-log" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.808310 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="74b9a17c-1408-4008-a7ce-2805decab132" containerName="nova-manage" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.811076 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.820468 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.821969 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.823640 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.824445 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.825385 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.825697 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.840703 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.867096 4812 scope.go:117] "RemoveContainer" containerID="9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.906864 4812 scope.go:117] "RemoveContainer" containerID="76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a" Nov 24 20:54:34 crc kubenswrapper[4812]: E1124 20:54:34.915185 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a\": container with ID starting with 76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a not found: ID does not exist" containerID="76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.915235 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a"} err="failed to get container status \"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a\": rpc error: code = NotFound desc = could not find container \"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a\": container with ID starting with 76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a not found: ID does not exist" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.915263 4812 scope.go:117] "RemoveContainer" containerID="9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053" Nov 24 20:54:34 crc kubenswrapper[4812]: E1124 20:54:34.916048 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053\": container with ID starting with 9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053 not found: ID does not exist" containerID="9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.916131 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053"} err="failed to get container status \"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053\": rpc error: code = NotFound desc = could not find container \"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053\": container with ID starting with 9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053 not found: ID does not exist" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.916150 4812 scope.go:117] "RemoveContainer" containerID="76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a" Nov 
24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.916481 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a"} err="failed to get container status \"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a\": rpc error: code = NotFound desc = could not find container \"76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a\": container with ID starting with 76dee2d27e0b31b22f315924c2a3605782c2d1baecc6a761ecd42cddb85a4f1a not found: ID does not exist" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.916499 4812 scope.go:117] "RemoveContainer" containerID="9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.916734 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053"} err="failed to get container status \"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053\": rpc error: code = NotFound desc = could not find container \"9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053\": container with ID starting with 9f17713b486df9c3e0d8860568de75b67a18a20b425dac1dfb252654dcd63053 not found: ID does not exist" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.981765 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eba93e8-854a-4975-98d9-8359b32f5378" path="/var/lib/kubelet/pods/7eba93e8-854a-4975-98d9-8359b32f5378/volumes" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.982456 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7fc21f-e348-494e-b969-197ae9069b36" path="/var/lib/kubelet/pods/bc7fc21f-e348-494e-b969-197ae9069b36/volumes" Nov 24 20:54:34 crc kubenswrapper[4812]: I1124 20:54:34.992660 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.002313 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0b334-a9cd-4925-afaa-d55f9f213f92-logs\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.002448 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbr8w\" (UniqueName: \"kubernetes.io/projected/f4b0b334-a9cd-4925-afaa-d55f9f213f92-kube-api-access-qbr8w\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.002488 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.003196 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.003370 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswr7\" (UniqueName: \"kubernetes.io/projected/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-kube-api-access-tswr7\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.003484 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-logs\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.003519 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-config-data\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.003592 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-config-data\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.003623 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.030152 4812 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.104723 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-dns-svc\") pod \"ef27b779-8611-4ba0-a021-de389bb54713\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.104770 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-sb\") pod \"ef27b779-8611-4ba0-a021-de389bb54713\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.104883 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-nb\") pod \"ef27b779-8611-4ba0-a021-de389bb54713\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.104906 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwszh\" (UniqueName: \"kubernetes.io/projected/ef27b779-8611-4ba0-a021-de389bb54713-kube-api-access-pwszh\") pod \"ef27b779-8611-4ba0-a021-de389bb54713\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.105617 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-config\") pod \"ef27b779-8611-4ba0-a021-de389bb54713\" (UID: \"ef27b779-8611-4ba0-a021-de389bb54713\") " Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.105802 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-logs\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.105834 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-config-data\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.105890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-config-data\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.105917 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.105953 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0b334-a9cd-4925-afaa-d55f9f213f92-logs\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " 
pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.106007 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbr8w\" (UniqueName: \"kubernetes.io/projected/f4b0b334-a9cd-4925-afaa-d55f9f213f92-kube-api-access-qbr8w\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.106037 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.106155 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.106214 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tswr7\" (UniqueName: \"kubernetes.io/projected/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-kube-api-access-tswr7\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.108039 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0b334-a9cd-4925-afaa-d55f9f213f92-logs\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.108272 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-logs\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.112486 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef27b779-8611-4ba0-a021-de389bb54713-kube-api-access-pwszh" (OuterVolumeSpecName: "kube-api-access-pwszh") pod "ef27b779-8611-4ba0-a021-de389bb54713" (UID: "ef27b779-8611-4ba0-a021-de389bb54713"). InnerVolumeSpecName "kube-api-access-pwszh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.112661 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.113826 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-config-data\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.114863 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.118551 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-config-data\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.121740 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.125870 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbr8w\" (UniqueName: \"kubernetes.io/projected/f4b0b334-a9cd-4925-afaa-d55f9f213f92-kube-api-access-qbr8w\") pod \"nova-metadata-0\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.131565 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tswr7\" (UniqueName: \"kubernetes.io/projected/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-kube-api-access-tswr7\") pod \"nova-api-0\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.158908 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef27b779-8611-4ba0-a021-de389bb54713" (UID: "ef27b779-8611-4ba0-a021-de389bb54713"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.167628 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-config" (OuterVolumeSpecName: "config") pod "ef27b779-8611-4ba0-a021-de389bb54713" (UID: "ef27b779-8611-4ba0-a021-de389bb54713"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.172480 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef27b779-8611-4ba0-a021-de389bb54713" (UID: "ef27b779-8611-4ba0-a021-de389bb54713"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.178483 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.185630 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.189742 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef27b779-8611-4ba0-a021-de389bb54713" (UID: "ef27b779-8611-4ba0-a021-de389bb54713"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.208028 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.208059 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.208070 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.208079 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwszh\" (UniqueName: \"kubernetes.io/projected/ef27b779-8611-4ba0-a021-de389bb54713-kube-api-access-pwszh\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.208089 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef27b779-8611-4ba0-a021-de389bb54713-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.693228 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:35 crc kubenswrapper[4812]: W1124 20:54:35.696886 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c986b3c_ed7b_4a29_9ed9_c6da0bd58319.slice/crio-c54b0feeb7096ec98d0db665db373d84d89644b3c485f199cb24a5d15f43046b WatchSource:0}: Error finding container c54b0feeb7096ec98d0db665db373d84d89644b3c485f199cb24a5d15f43046b: Status 404 returned error can't find the container with id c54b0feeb7096ec98d0db665db373d84d89644b3c485f199cb24a5d15f43046b Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.707399 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" 
event={"ID":"ef27b779-8611-4ba0-a021-de389bb54713","Type":"ContainerDied","Data":"52d3f6f17e582aaaa4f0d733ce022b321d6e4df2dc5860e1352ecfa70d2e6b17"} Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.707446 4812 scope.go:117] "RemoveContainer" containerID="a4edb38be07ab8982447384ea7f74bee958d3b36d7ad68eb697765544a7e7f48" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.707459 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f9cd89ff9-dsg2v" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.728301 4812 scope.go:117] "RemoveContainer" containerID="3e6ed54f02ba37a8188b11faa3f92113150a6824bc96c95775be94f41da23b40" Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.772467 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f9cd89ff9-dsg2v"] Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.779609 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f9cd89ff9-dsg2v"] Nov 24 20:54:35 crc kubenswrapper[4812]: I1124 20:54:35.836110 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.727904 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319","Type":"ContainerStarted","Data":"dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12"} Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.730505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319","Type":"ContainerStarted","Data":"0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3"} Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.730571 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319","Type":"ContainerStarted","Data":"c54b0feeb7096ec98d0db665db373d84d89644b3c485f199cb24a5d15f43046b"} Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.735469 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4b0b334-a9cd-4925-afaa-d55f9f213f92","Type":"ContainerStarted","Data":"618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d"} Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.735544 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4b0b334-a9cd-4925-afaa-d55f9f213f92","Type":"ContainerStarted","Data":"a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e"} Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.735572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4b0b334-a9cd-4925-afaa-d55f9f213f92","Type":"ContainerStarted","Data":"4f8f9084a51c11a6881ba6f16e416142921b56c502b635feca70a857d8ee669c"} Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.769579 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.769549037 podStartE2EDuration="2.769549037s" podCreationTimestamp="2025-11-24 20:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:36.760561153 +0000 UTC m=+5870.549513664" watchObservedRunningTime="2025-11-24 20:54:36.769549037 +0000 UTC m=+5870.558501438" Nov 24 20:54:36 crc 
kubenswrapper[4812]: I1124 20:54:36.796440 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.796409837 podStartE2EDuration="2.796409837s" podCreationTimestamp="2025-11-24 20:54:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:36.79545509 +0000 UTC m=+5870.584407511" watchObservedRunningTime="2025-11-24 20:54:36.796409837 +0000 UTC m=+5870.585362248" Nov 24 20:54:36 crc kubenswrapper[4812]: I1124 20:54:36.983501 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef27b779-8611-4ba0-a021-de389bb54713" path="/var/lib/kubelet/pods/ef27b779-8611-4ba0-a021-de389bb54713/volumes" Nov 24 20:54:40 crc kubenswrapper[4812]: I1124 20:54:40.030116 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:40 crc kubenswrapper[4812]: I1124 20:54:40.060159 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:40 crc kubenswrapper[4812]: I1124 20:54:40.185948 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 20:54:40 crc kubenswrapper[4812]: I1124 20:54:40.186242 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 20:54:40 crc kubenswrapper[4812]: I1124 20:54:40.811811 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.431090 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.987134 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bhd5n"] Nov 24 20:54:41 crc kubenswrapper[4812]: E1124 20:54:41.987581 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef27b779-8611-4ba0-a021-de389bb54713" containerName="dnsmasq-dns" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.987598 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef27b779-8611-4ba0-a021-de389bb54713" containerName="dnsmasq-dns" Nov 24 20:54:41 crc kubenswrapper[4812]: E1124 20:54:41.987642 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef27b779-8611-4ba0-a021-de389bb54713" containerName="init" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.987650 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef27b779-8611-4ba0-a021-de389bb54713" containerName="init" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.987842 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef27b779-8611-4ba0-a021-de389bb54713" containerName="dnsmasq-dns" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.988534 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.991240 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 20:54:41 crc kubenswrapper[4812]: I1124 20:54:41.991637 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.004494 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bhd5n"] Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.083003 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-config-data\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.083118 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.083239 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqc5w\" (UniqueName: \"kubernetes.io/projected/390133e9-6e9e-4fe4-9d53-312c2c1ea999-kube-api-access-fqc5w\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.083491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-scripts\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.185209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-scripts\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.185308 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-config-data\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.185360 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.185413 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqc5w\" (UniqueName: 
\"kubernetes.io/projected/390133e9-6e9e-4fe4-9d53-312c2c1ea999-kube-api-access-fqc5w\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.193811 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.195542 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-config-data\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.196807 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-scripts\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.211043 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqc5w\" (UniqueName: \"kubernetes.io/projected/390133e9-6e9e-4fe4-9d53-312c2c1ea999-kube-api-access-fqc5w\") pod \"nova-cell1-cell-mapping-bhd5n\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.329235 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.800807 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bhd5n"] Nov 24 20:54:42 crc kubenswrapper[4812]: I1124 20:54:42.813517 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhd5n" event={"ID":"390133e9-6e9e-4fe4-9d53-312c2c1ea999","Type":"ContainerStarted","Data":"8d27026c8bf36787db01482ca647d92f3a3987efc2ff1ac4063136d45a10aead"} Nov 24 20:54:43 crc kubenswrapper[4812]: I1124 20:54:43.847665 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhd5n" event={"ID":"390133e9-6e9e-4fe4-9d53-312c2c1ea999","Type":"ContainerStarted","Data":"e52321729579f7f95db4226503697d2d671e79d26560c4f733aba57f97cc42c9"} Nov 24 20:54:43 crc kubenswrapper[4812]: I1124 20:54:43.881246 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bhd5n" podStartSLOduration=2.881218151 podStartE2EDuration="2.881218151s" podCreationTimestamp="2025-11-24 20:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:43.871695331 +0000 UTC m=+5877.660647712" watchObservedRunningTime="2025-11-24 20:54:43.881218151 +0000 UTC m=+5877.670170532" Nov 24 20:54:45 crc kubenswrapper[4812]: I1124 20:54:45.179706 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 20:54:45 crc kubenswrapper[4812]: I1124 20:54:45.180206 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 20:54:45 crc kubenswrapper[4812]: I1124 20:54:45.186717 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 20:54:45 crc kubenswrapper[4812]: I1124 20:54:45.186780 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 20:54:46 crc kubenswrapper[4812]: I1124 20:54:46.280617 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.95:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 20:54:46 crc kubenswrapper[4812]: I1124 20:54:46.280626 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 20:54:46 crc kubenswrapper[4812]: I1124 20:54:46.280744 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 20:54:46 crc kubenswrapper[4812]: I1124 20:54:46.280783 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.95:8775/\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Nov 24 20:54:47 crc kubenswrapper[4812]: I1124 20:54:47.883330 4812 generic.go:334] "Generic (PLEG): container finished" podID="390133e9-6e9e-4fe4-9d53-312c2c1ea999" containerID="e52321729579f7f95db4226503697d2d671e79d26560c4f733aba57f97cc42c9" exitCode=0 Nov 24 20:54:47 crc kubenswrapper[4812]: I1124 20:54:47.883525 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhd5n" event={"ID":"390133e9-6e9e-4fe4-9d53-312c2c1ea999","Type":"ContainerDied","Data":"e52321729579f7f95db4226503697d2d671e79d26560c4f733aba57f97cc42c9"} Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.343569 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.536421 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-combined-ca-bundle\") pod \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.536711 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-scripts\") pod \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.538092 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-config-data\") pod \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.538197 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqc5w\" (UniqueName: \"kubernetes.io/projected/390133e9-6e9e-4fe4-9d53-312c2c1ea999-kube-api-access-fqc5w\") pod \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\" (UID: \"390133e9-6e9e-4fe4-9d53-312c2c1ea999\") " Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.548673 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-scripts" (OuterVolumeSpecName: "scripts") pod "390133e9-6e9e-4fe4-9d53-312c2c1ea999" (UID: "390133e9-6e9e-4fe4-9d53-312c2c1ea999"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.549070 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/390133e9-6e9e-4fe4-9d53-312c2c1ea999-kube-api-access-fqc5w" (OuterVolumeSpecName: "kube-api-access-fqc5w") pod "390133e9-6e9e-4fe4-9d53-312c2c1ea999" (UID: "390133e9-6e9e-4fe4-9d53-312c2c1ea999"). InnerVolumeSpecName "kube-api-access-fqc5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.564510 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "390133e9-6e9e-4fe4-9d53-312c2c1ea999" (UID: "390133e9-6e9e-4fe4-9d53-312c2c1ea999"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.573722 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-config-data" (OuterVolumeSpecName: "config-data") pod "390133e9-6e9e-4fe4-9d53-312c2c1ea999" (UID: "390133e9-6e9e-4fe4-9d53-312c2c1ea999"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.640734 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.640777 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqc5w\" (UniqueName: \"kubernetes.io/projected/390133e9-6e9e-4fe4-9d53-312c2c1ea999-kube-api-access-fqc5w\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.640790 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.640803 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390133e9-6e9e-4fe4-9d53-312c2c1ea999-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.908042 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bhd5n" event={"ID":"390133e9-6e9e-4fe4-9d53-312c2c1ea999","Type":"ContainerDied","Data":"8d27026c8bf36787db01482ca647d92f3a3987efc2ff1ac4063136d45a10aead"} Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.908101 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d27026c8bf36787db01482ca647d92f3a3987efc2ff1ac4063136d45a10aead" Nov 24 20:54:49 crc kubenswrapper[4812]: I1124 20:54:49.908121 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bhd5n" Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.114781 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.115381 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-log" containerID="cri-o://0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3" gracePeriod=30 Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.115464 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-api" containerID="cri-o://dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12" gracePeriod=30 Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.146136 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.146382 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-log" containerID="cri-o://a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e" gracePeriod=30 Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.146517 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-metadata" containerID="cri-o://618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d" gracePeriod=30 Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.921149 4812 generic.go:334] "Generic (PLEG): container finished" podID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerID="0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3" exitCode=143 Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.921247 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319","Type":"ContainerDied","Data":"0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3"} Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.923415 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerID="a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e" exitCode=143 Nov 24 20:54:50 crc kubenswrapper[4812]: I1124 20:54:50.923463 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4b0b334-a9cd-4925-afaa-d55f9f213f92","Type":"ContainerDied","Data":"a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e"} Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.834837 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.925509 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-config-data\") pod \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.925991 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tswr7\" (UniqueName: \"kubernetes.io/projected/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-kube-api-access-tswr7\") pod \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.926020 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-logs\") pod \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.926064 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-combined-ca-bundle\") pod \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\" (UID: \"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319\") " Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.926558 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-logs" (OuterVolumeSpecName: "logs") pod "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" (UID: "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.926659 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.931183 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-kube-api-access-tswr7" (OuterVolumeSpecName: "kube-api-access-tswr7") pod "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" (UID: "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319"). InnerVolumeSpecName "kube-api-access-tswr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.977062 4812 generic.go:334] "Generic (PLEG): container finished" podID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerID="dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12" exitCode=0 Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.977100 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319","Type":"ContainerDied","Data":"dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12"} Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.977139 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c986b3c-ed7b-4a29-9ed9-c6da0bd58319","Type":"ContainerDied","Data":"c54b0feeb7096ec98d0db665db373d84d89644b3c485f199cb24a5d15f43046b"} Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.977158 4812 scope.go:117] "RemoveContainer" containerID="dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12" Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.977269 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.978316 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-config-data" (OuterVolumeSpecName: "config-data") pod "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" (UID: "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:53 crc kubenswrapper[4812]: I1124 20:54:53.983634 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" (UID: "7c986b3c-ed7b-4a29-9ed9-c6da0bd58319"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.028621 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.028659 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.028672 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tswr7\" (UniqueName: \"kubernetes.io/projected/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319-kube-api-access-tswr7\") on node \"crc\" DevicePath \"\"" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.036902 4812 scope.go:117] "RemoveContainer" containerID="0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.059720 4812 scope.go:117] "RemoveContainer" containerID="dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12" Nov 24 20:54:54 crc kubenswrapper[4812]: E1124 20:54:54.060193 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12\": container with ID starting with dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12 not found: ID does not exist" containerID="dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.060228 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12"} err="failed to get container status \"dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12\": rpc error: code = NotFound desc = could not find container \"dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12\": container with ID starting with dfe32da986e448b0f784d5247699b9048087dca9940e1847c7964e8338d40b12 not found: ID does not exist" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.060249 4812 scope.go:117] "RemoveContainer" containerID="0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3" Nov 24 20:54:54 crc kubenswrapper[4812]: E1124 20:54:54.060537 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3\": container with ID starting with 0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3 not found: ID does not exist" containerID="0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.060569 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3"} err="failed to get container status \"0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3\": rpc error: code = NotFound desc = could not find container \"0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3\": container with ID starting with 0d7a5b58d687b8bdb31dce36f7b87de1fad2f92f5750470a9077e5c3f8fe73e3 not found: ID does not exist" Nov 24 20:54:54 crc 
kubenswrapper[4812]: I1124 20:54:54.325942 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.347621 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.356078 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:54 crc kubenswrapper[4812]: E1124 20:54:54.356810 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-api" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.356848 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-api" Nov 24 20:54:54 crc kubenswrapper[4812]: E1124 20:54:54.356947 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390133e9-6e9e-4fe4-9d53-312c2c1ea999" containerName="nova-manage" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.356968 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="390133e9-6e9e-4fe4-9d53-312c2c1ea999" containerName="nova-manage" Nov 24 20:54:54 crc kubenswrapper[4812]: E1124 20:54:54.356999 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-log" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.357017 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-log" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.357549 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="390133e9-6e9e-4fe4-9d53-312c2c1ea999" containerName="nova-manage" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.357619 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-log" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.357643 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" containerName="nova-api-api" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.359861 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.362937 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.384230 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.441772 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhdm\" (UniqueName: \"kubernetes.io/projected/2a823f54-7dbb-4923-89ce-739951d8b55f-kube-api-access-grhdm\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.442041 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.443800 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a823f54-7dbb-4923-89ce-739951d8b55f-logs\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.444061 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-config-data\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.546552 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a823f54-7dbb-4923-89ce-739951d8b55f-logs\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.546612 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-config-data\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.546677 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhdm\" (UniqueName: \"kubernetes.io/projected/2a823f54-7dbb-4923-89ce-739951d8b55f-kube-api-access-grhdm\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.546701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.547100 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a823f54-7dbb-4923-89ce-739951d8b55f-logs\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " 
pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.551817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.552310 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-config-data\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.566772 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhdm\" (UniqueName: \"kubernetes.io/projected/2a823f54-7dbb-4923-89ce-739951d8b55f-kube-api-access-grhdm\") pod \"nova-api-0\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.683175 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:54:54 crc kubenswrapper[4812]: I1124 20:54:54.979541 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c986b3c-ed7b-4a29-9ed9-c6da0bd58319" path="/var/lib/kubelet/pods/7c986b3c-ed7b-4a29-9ed9-c6da0bd58319/volumes" Nov 24 20:54:55 crc kubenswrapper[4812]: I1124 20:54:55.233281 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:54:56 crc kubenswrapper[4812]: I1124 20:54:56.020211 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a823f54-7dbb-4923-89ce-739951d8b55f","Type":"ContainerStarted","Data":"b9b63e3c2b07b898841ec8c6233df14a97c5c4f400d55e1bec86a0fe028a507b"} Nov 24 20:54:56 crc kubenswrapper[4812]: I1124 20:54:56.020586 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a823f54-7dbb-4923-89ce-739951d8b55f","Type":"ContainerStarted","Data":"ea444804ddbcda422bbdb2c08e90b7b647c36494c5e8205396bea8271173de08"} Nov 24 20:54:56 crc kubenswrapper[4812]: I1124 20:54:56.020598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a823f54-7dbb-4923-89ce-739951d8b55f","Type":"ContainerStarted","Data":"ada09ab011d3c31704350caabd8113201ef2e8440af779fc3adf99008760e367"} Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.122426 4812 generic.go:334] "Generic (PLEG): container finished" podID="d05ed2f4-83c0-4baa-be13-ba5e179b4804" containerID="335ce9c8ffc70b8007a68e945e55d92701133418ef1b79818e817513f7f4e7a6" exitCode=137 Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.122768 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d05ed2f4-83c0-4baa-be13-ba5e179b4804","Type":"ContainerDied","Data":"335ce9c8ffc70b8007a68e945e55d92701133418ef1b79818e817513f7f4e7a6"} Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.346465 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.369787 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=9.369765573 podStartE2EDuration="9.369765573s" podCreationTimestamp="2025-11-24 20:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:54:56.054026206 +0000 UTC m=+5889.842978577" watchObservedRunningTime="2025-11-24 20:55:03.369765573 +0000 UTC m=+5897.158717954" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.455466 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw4ds\" (UniqueName: \"kubernetes.io/projected/d05ed2f4-83c0-4baa-be13-ba5e179b4804-kube-api-access-jw4ds\") pod \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.455581 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-config-data\") pod \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.455774 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-combined-ca-bundle\") pod \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\" (UID: \"d05ed2f4-83c0-4baa-be13-ba5e179b4804\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.460790 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05ed2f4-83c0-4baa-be13-ba5e179b4804-kube-api-access-jw4ds" (OuterVolumeSpecName: "kube-api-access-jw4ds") pod "d05ed2f4-83c0-4baa-be13-ba5e179b4804" (UID: "d05ed2f4-83c0-4baa-be13-ba5e179b4804"). InnerVolumeSpecName "kube-api-access-jw4ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.481608 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d05ed2f4-83c0-4baa-be13-ba5e179b4804" (UID: "d05ed2f4-83c0-4baa-be13-ba5e179b4804"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.483940 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-config-data" (OuterVolumeSpecName: "config-data") pod "d05ed2f4-83c0-4baa-be13-ba5e179b4804" (UID: "d05ed2f4-83c0-4baa-be13-ba5e179b4804"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.559120 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw4ds\" (UniqueName: \"kubernetes.io/projected/d05ed2f4-83c0-4baa-be13-ba5e179b4804-kube-api-access-jw4ds\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.559453 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.559584 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d05ed2f4-83c0-4baa-be13-ba5e179b4804-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.919949 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.969440 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-combined-ca-bundle\") pod \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.969507 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-config-data\") pod \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.969607 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0b334-a9cd-4925-afaa-d55f9f213f92-logs\") pod \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.969792 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbr8w\" (UniqueName: \"kubernetes.io/projected/f4b0b334-a9cd-4925-afaa-d55f9f213f92-kube-api-access-qbr8w\") pod \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.969842 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-nova-metadata-tls-certs\") pod \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\" (UID: \"f4b0b334-a9cd-4925-afaa-d55f9f213f92\") " Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.970135 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4b0b334-a9cd-4925-afaa-d55f9f213f92-logs" (OuterVolumeSpecName: "logs") pod "f4b0b334-a9cd-4925-afaa-d55f9f213f92" (UID: "f4b0b334-a9cd-4925-afaa-d55f9f213f92"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.970821 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b0b334-a9cd-4925-afaa-d55f9f213f92-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:03 crc kubenswrapper[4812]: I1124 20:55:03.973467 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b0b334-a9cd-4925-afaa-d55f9f213f92-kube-api-access-qbr8w" (OuterVolumeSpecName: "kube-api-access-qbr8w") pod "f4b0b334-a9cd-4925-afaa-d55f9f213f92" (UID: "f4b0b334-a9cd-4925-afaa-d55f9f213f92"). InnerVolumeSpecName "kube-api-access-qbr8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.003578 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-config-data" (OuterVolumeSpecName: "config-data") pod "f4b0b334-a9cd-4925-afaa-d55f9f213f92" (UID: "f4b0b334-a9cd-4925-afaa-d55f9f213f92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.006951 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4b0b334-a9cd-4925-afaa-d55f9f213f92" (UID: "f4b0b334-a9cd-4925-afaa-d55f9f213f92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.031574 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f4b0b334-a9cd-4925-afaa-d55f9f213f92" (UID: "f4b0b334-a9cd-4925-afaa-d55f9f213f92"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.073048 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbr8w\" (UniqueName: \"kubernetes.io/projected/f4b0b334-a9cd-4925-afaa-d55f9f213f92-kube-api-access-qbr8w\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.073083 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.073093 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.073101 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b0b334-a9cd-4925-afaa-d55f9f213f92-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.134831 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d05ed2f4-83c0-4baa-be13-ba5e179b4804","Type":"ContainerDied","Data":"d435c46ca6593ae7585ccdedb41d6956ff5d8faa7d98e525ae2671d52eafd2c1"} Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.134858 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.134921 4812 scope.go:117] "RemoveContainer" containerID="335ce9c8ffc70b8007a68e945e55d92701133418ef1b79818e817513f7f4e7a6" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.139569 4812 generic.go:334] "Generic (PLEG): container finished" podID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerID="618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d" exitCode=0 Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.139618 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4b0b334-a9cd-4925-afaa-d55f9f213f92","Type":"ContainerDied","Data":"618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d"} Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.139649 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f4b0b334-a9cd-4925-afaa-d55f9f213f92","Type":"ContainerDied","Data":"4f8f9084a51c11a6881ba6f16e416142921b56c502b635feca70a857d8ee669c"} Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.139708 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.171596 4812 scope.go:117] "RemoveContainer" containerID="618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.181881 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.193203 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.202082 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.211809 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.222853 4812 scope.go:117] "RemoveContainer" containerID="a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.227011 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: E1124 20:55:04.227892 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-log" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.227927 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-log" Nov 24 20:55:04 crc kubenswrapper[4812]: E1124 20:55:04.227971 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-metadata" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.228002 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-metadata" Nov 24 20:55:04 crc kubenswrapper[4812]: E1124 20:55:04.228024 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05ed2f4-83c0-4baa-be13-ba5e179b4804" containerName="nova-scheduler-scheduler" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.228034 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05ed2f4-83c0-4baa-be13-ba5e179b4804" containerName="nova-scheduler-scheduler" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.228359 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-metadata" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.228383 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" containerName="nova-metadata-log" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.228404 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05ed2f4-83c0-4baa-be13-ba5e179b4804" containerName="nova-scheduler-scheduler" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.230194 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.232815 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.233957 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.239962 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.241756 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.244812 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.244820 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.253549 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.261632 4812 scope.go:117] "RemoveContainer" containerID="618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d" Nov 24 20:55:04 crc kubenswrapper[4812]: E1124 20:55:04.262286 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d\": container with ID starting with 618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d not found: ID does not exist" containerID="618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.262629 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d"} err="failed to get container status \"618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d\": rpc error: code = NotFound desc = could not find container \"618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d\": container with ID starting with 618ade59ca645ff1af3cac2d75decf80aa56939ad882bd2c86e67902a18cf88d not found: ID does not exist" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.262677 4812 scope.go:117] "RemoveContainer" containerID="a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e" Nov 24 20:55:04 crc kubenswrapper[4812]: E1124 20:55:04.263606 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e\": container with ID starting with a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e not found: ID does not exist" containerID="a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.264674 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e"} err="failed to get container status \"a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e\": rpc error: code = NotFound desc = could not find container \"a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e\": container 
with ID starting with a02f51cf1c32348061c93eddfbff25eca65b9aee52f56292abd168de20d2440e not found: ID does not exist" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.280447 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs7cw\" (UniqueName: \"kubernetes.io/projected/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-kube-api-access-zs7cw\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.280535 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-logs\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.280808 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4z6\" (UniqueName: \"kubernetes.io/projected/021029af-7cb0-4e84-bd75-8fe60d9d41d4-kube-api-access-qh4z6\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.280892 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.281046 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.281133 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-config-data\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.281243 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-config-data\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.281270 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382690 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs7cw\" (UniqueName: \"kubernetes.io/projected/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-kube-api-access-zs7cw\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " 
pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382750 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-logs\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4z6\" (UniqueName: \"kubernetes.io/projected/021029af-7cb0-4e84-bd75-8fe60d9d41d4-kube-api-access-qh4z6\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382808 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382849 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382874 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-config-data\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-config-data\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.382919 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.383447 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-logs\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.386808 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-config-data\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.387749 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-config-data\") pod \"nova-scheduler-0\" (UID: 
\"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.388049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.392080 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.400293 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs7cw\" (UniqueName: \"kubernetes.io/projected/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-kube-api-access-zs7cw\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.422553 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.423205 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4z6\" (UniqueName: \"kubernetes.io/projected/021029af-7cb0-4e84-bd75-8fe60d9d41d4-kube-api-access-qh4z6\") pod \"nova-scheduler-0\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.564409 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.573976 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.684193 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.684243 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.980115 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05ed2f4-83c0-4baa-be13-ba5e179b4804" path="/var/lib/kubelet/pods/d05ed2f4-83c0-4baa-be13-ba5e179b4804/volumes" Nov 24 20:55:04 crc kubenswrapper[4812]: I1124 20:55:04.980701 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b0b334-a9cd-4925-afaa-d55f9f213f92" path="/var/lib/kubelet/pods/f4b0b334-a9cd-4925-afaa-d55f9f213f92/volumes" Nov 24 20:55:05 crc kubenswrapper[4812]: I1124 20:55:05.064655 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 20:55:05 crc kubenswrapper[4812]: W1124 20:55:05.065491 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b8f08ee_8290_4ce4_a03f_7717a9d8e69a.slice/crio-64968326841bd52220783ff80ff1261c1f420a44597db987c35071fad2f13974 WatchSource:0}: Error finding container 64968326841bd52220783ff80ff1261c1f420a44597db987c35071fad2f13974: Status 404 returned error can't find the container with id 64968326841bd52220783ff80ff1261c1f420a44597db987c35071fad2f13974 Nov 24 20:55:05 crc kubenswrapper[4812]: I1124 20:55:05.128875 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 20:55:05 crc kubenswrapper[4812]: I1124 20:55:05.156497 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a","Type":"ContainerStarted","Data":"64968326841bd52220783ff80ff1261c1f420a44597db987c35071fad2f13974"} Nov 24 20:55:05 crc kubenswrapper[4812]: I1124 20:55:05.158550 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"021029af-7cb0-4e84-bd75-8fe60d9d41d4","Type":"ContainerStarted","Data":"1b66ba0d0fcc2a86153e9c9f5c4c5527cebd1bf65b96cf4ff9f5190f8d396e42"} Nov 24 20:55:05 crc kubenswrapper[4812]: I1124 20:55:05.766521 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.97:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 20:55:05 crc kubenswrapper[4812]: I1124 20:55:05.766535 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.97:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 20:55:06 crc kubenswrapper[4812]: I1124 20:55:06.177591 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"021029af-7cb0-4e84-bd75-8fe60d9d41d4","Type":"ContainerStarted","Data":"6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb"} Nov 24 20:55:06 crc kubenswrapper[4812]: I1124 20:55:06.186664 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a","Type":"ContainerStarted","Data":"a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af"} Nov 24 20:55:06 crc kubenswrapper[4812]: I1124 20:55:06.186736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a","Type":"ContainerStarted","Data":"cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c"} Nov 24 20:55:06 crc kubenswrapper[4812]: I1124 20:55:06.247009 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.246991951 podStartE2EDuration="2.246991951s" podCreationTimestamp="2025-11-24 20:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:55:06.206396752 +0000 UTC m=+5899.995349213" watchObservedRunningTime="2025-11-24 20:55:06.246991951 +0000 UTC m=+5900.035944322" Nov 24 20:55:06 crc kubenswrapper[4812]: I1124 20:55:06.255315 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.255293336 podStartE2EDuration="2.255293336s" podCreationTimestamp="2025-11-24 20:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:55:06.244378107 +0000 UTC m=+5900.033330488" watchObservedRunningTime="2025-11-24 20:55:06.255293336 +0000 UTC m=+5900.044245707" Nov 24 20:55:09 crc kubenswrapper[4812]: I1124 20:55:09.564527 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 20:55:09 crc kubenswrapper[4812]: I1124 20:55:09.565680 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 20:55:09 crc kubenswrapper[4812]: I1124 20:55:09.574683 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.565589 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.566170 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.574652 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.606652 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.691736 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.692366 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.695154 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 20:55:14 crc kubenswrapper[4812]: I1124 20:55:14.702146 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.300385 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-api-0" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.306164 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.343461 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.507542 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-844f65d9f5-jrtsd"] Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.509490 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.524060 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844f65d9f5-jrtsd"] Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.581372 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.581642 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.669100 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5zf\" (UniqueName: \"kubernetes.io/projected/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-kube-api-access-6z5zf\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.669208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-dns-svc\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.669230 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-sb\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.669252 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-config\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.669524 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-nb\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: 
\"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.770900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-nb\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.770973 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5zf\" (UniqueName: \"kubernetes.io/projected/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-kube-api-access-6z5zf\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.771045 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-dns-svc\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.771062 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-sb\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.771082 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-config\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.771913 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-config\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.771934 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-nb\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.771912 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-sb\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.772522 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-dns-svc\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: 
I1124 20:55:15.791208 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5zf\" (UniqueName: \"kubernetes.io/projected/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-kube-api-access-6z5zf\") pod \"dnsmasq-dns-844f65d9f5-jrtsd\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:15 crc kubenswrapper[4812]: I1124 20:55:15.845664 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:16 crc kubenswrapper[4812]: I1124 20:55:16.342599 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844f65d9f5-jrtsd"] Nov 24 20:55:17 crc kubenswrapper[4812]: I1124 20:55:17.317098 4812 generic.go:334] "Generic (PLEG): container finished" podID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerID="0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc" exitCode=0 Nov 24 20:55:17 crc kubenswrapper[4812]: I1124 20:55:17.317279 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" event={"ID":"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963","Type":"ContainerDied","Data":"0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc"} Nov 24 20:55:17 crc kubenswrapper[4812]: I1124 20:55:17.318362 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" event={"ID":"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963","Type":"ContainerStarted","Data":"4c728eac596bc24c56ac988667df536fdc236249af856b98a1eb91df9656c092"} Nov 24 20:55:18 crc kubenswrapper[4812]: I1124 20:55:18.333305 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" event={"ID":"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963","Type":"ContainerStarted","Data":"6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854"} Nov 24 20:55:18 crc kubenswrapper[4812]: I1124 20:55:18.334106 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:18 crc kubenswrapper[4812]: I1124 20:55:18.362153 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" podStartSLOduration=3.362125008 podStartE2EDuration="3.362125008s" podCreationTimestamp="2025-11-24 20:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:55:18.356564261 +0000 UTC m=+5912.145516672" watchObservedRunningTime="2025-11-24 20:55:18.362125008 +0000 UTC m=+5912.151077409" Nov 24 20:55:18 crc kubenswrapper[4812]: I1124 20:55:18.793647 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:55:18 crc kubenswrapper[4812]: I1124 20:55:18.793918 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-log" containerID="cri-o://ea444804ddbcda422bbdb2c08e90b7b647c36494c5e8205396bea8271173de08" gracePeriod=30 Nov 24 20:55:18 crc kubenswrapper[4812]: I1124 20:55:18.794021 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-api" containerID="cri-o://b9b63e3c2b07b898841ec8c6233df14a97c5c4f400d55e1bec86a0fe028a507b" gracePeriod=30 Nov 24 20:55:19 crc kubenswrapper[4812]: I1124 20:55:19.345976 4812 
generic.go:334] "Generic (PLEG): container finished" podID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerID="ea444804ddbcda422bbdb2c08e90b7b647c36494c5e8205396bea8271173de08" exitCode=143 Nov 24 20:55:19 crc kubenswrapper[4812]: I1124 20:55:19.346058 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a823f54-7dbb-4923-89ce-739951d8b55f","Type":"ContainerDied","Data":"ea444804ddbcda422bbdb2c08e90b7b647c36494c5e8205396bea8271173de08"} Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.383325 4812 generic.go:334] "Generic (PLEG): container finished" podID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerID="b9b63e3c2b07b898841ec8c6233df14a97c5c4f400d55e1bec86a0fe028a507b" exitCode=0 Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.383428 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a823f54-7dbb-4923-89ce-739951d8b55f","Type":"ContainerDied","Data":"b9b63e3c2b07b898841ec8c6233df14a97c5c4f400d55e1bec86a0fe028a507b"} Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.383972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2a823f54-7dbb-4923-89ce-739951d8b55f","Type":"ContainerDied","Data":"ada09ab011d3c31704350caabd8113201ef2e8440af779fc3adf99008760e367"} Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.384000 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ada09ab011d3c31704350caabd8113201ef2e8440af779fc3adf99008760e367" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.458119 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.514923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-combined-ca-bundle\") pod \"2a823f54-7dbb-4923-89ce-739951d8b55f\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.515311 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-config-data\") pod \"2a823f54-7dbb-4923-89ce-739951d8b55f\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.515482 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grhdm\" (UniqueName: \"kubernetes.io/projected/2a823f54-7dbb-4923-89ce-739951d8b55f-kube-api-access-grhdm\") pod \"2a823f54-7dbb-4923-89ce-739951d8b55f\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.515610 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a823f54-7dbb-4923-89ce-739951d8b55f-logs\") pod \"2a823f54-7dbb-4923-89ce-739951d8b55f\" (UID: \"2a823f54-7dbb-4923-89ce-739951d8b55f\") " Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.524526 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a823f54-7dbb-4923-89ce-739951d8b55f-logs" (OuterVolumeSpecName: "logs") pod "2a823f54-7dbb-4923-89ce-739951d8b55f" (UID: "2a823f54-7dbb-4923-89ce-739951d8b55f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.539189 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a823f54-7dbb-4923-89ce-739951d8b55f-kube-api-access-grhdm" (OuterVolumeSpecName: "kube-api-access-grhdm") pod "2a823f54-7dbb-4923-89ce-739951d8b55f" (UID: "2a823f54-7dbb-4923-89ce-739951d8b55f"). InnerVolumeSpecName "kube-api-access-grhdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.586022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a823f54-7dbb-4923-89ce-739951d8b55f" (UID: "2a823f54-7dbb-4923-89ce-739951d8b55f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.602162 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-config-data" (OuterVolumeSpecName: "config-data") pod "2a823f54-7dbb-4923-89ce-739951d8b55f" (UID: "2a823f54-7dbb-4923-89ce-739951d8b55f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.618763 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a823f54-7dbb-4923-89ce-739951d8b55f-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.618877 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.618958 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a823f54-7dbb-4923-89ce-739951d8b55f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:22 crc kubenswrapper[4812]: I1124 20:55:22.619021 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grhdm\" (UniqueName: \"kubernetes.io/projected/2a823f54-7dbb-4923-89ce-739951d8b55f-kube-api-access-grhdm\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.391363 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.421274 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.428459 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.438768 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 20:55:23 crc kubenswrapper[4812]: E1124 20:55:23.439150 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-api" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.439166 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-api" Nov 24 20:55:23 crc kubenswrapper[4812]: E1124 20:55:23.439180 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-log" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.439187 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-log" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.439383 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-api" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.439406 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" containerName="nova-api-log" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.440324 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.443227 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.443239 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.443573 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.456535 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.535394 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-config-data\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.535455 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.535541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f864a763-a0e3-49bf-a7cf-913010c09061-logs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.535580 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.535602 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-public-tls-certs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.535677 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndll8\" (UniqueName: \"kubernetes.io/projected/f864a763-a0e3-49bf-a7cf-913010c09061-kube-api-access-ndll8\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.638152 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f864a763-a0e3-49bf-a7cf-913010c09061-logs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.638638 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.638680 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-public-tls-certs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.638771 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndll8\" (UniqueName: \"kubernetes.io/projected/f864a763-a0e3-49bf-a7cf-913010c09061-kube-api-access-ndll8\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.638918 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f864a763-a0e3-49bf-a7cf-913010c09061-logs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.638951 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-config-data\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.639134 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.644995 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-config-data\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.645301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.648945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.651847 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-public-tls-certs\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.670383 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndll8\" (UniqueName: \"kubernetes.io/projected/f864a763-a0e3-49bf-a7cf-913010c09061-kube-api-access-ndll8\") pod \"nova-api-0\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " 
pod="openstack/nova-api-0" Nov 24 20:55:23 crc kubenswrapper[4812]: I1124 20:55:23.754312 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 20:55:24 crc kubenswrapper[4812]: I1124 20:55:24.246068 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 20:55:24 crc kubenswrapper[4812]: I1124 20:55:24.400887 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f864a763-a0e3-49bf-a7cf-913010c09061","Type":"ContainerStarted","Data":"81b6f7437d5fba3e629d436ff1da94fe1869ed3994b60b4e94e000f15b3fee16"} Nov 24 20:55:24 crc kubenswrapper[4812]: I1124 20:55:24.572116 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 20:55:24 crc kubenswrapper[4812]: I1124 20:55:24.573182 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 20:55:24 crc kubenswrapper[4812]: I1124 20:55:24.580651 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 20:55:24 crc kubenswrapper[4812]: I1124 20:55:24.981196 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a823f54-7dbb-4923-89ce-739951d8b55f" path="/var/lib/kubelet/pods/2a823f54-7dbb-4923-89ce-739951d8b55f/volumes" Nov 24 20:55:25 crc kubenswrapper[4812]: I1124 20:55:25.414022 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f864a763-a0e3-49bf-a7cf-913010c09061","Type":"ContainerStarted","Data":"9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5"} Nov 24 20:55:25 crc kubenswrapper[4812]: I1124 20:55:25.414095 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f864a763-a0e3-49bf-a7cf-913010c09061","Type":"ContainerStarted","Data":"5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7"} Nov 24 20:55:25 crc kubenswrapper[4812]: I1124 20:55:25.421520 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 20:55:25 crc kubenswrapper[4812]: I1124 20:55:25.449372 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.44935268 podStartE2EDuration="2.44935268s" podCreationTimestamp="2025-11-24 20:55:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:55:25.434389567 +0000 UTC m=+5919.223341948" watchObservedRunningTime="2025-11-24 20:55:25.44935268 +0000 UTC m=+5919.238305061" Nov 24 20:55:25 crc kubenswrapper[4812]: I1124 20:55:25.847592 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 20:55:25 crc kubenswrapper[4812]: I1124 20:55:25.923981 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6678c6cd77-b7z64"] Nov 24 20:55:25 crc kubenswrapper[4812]: I1124 20:55:25.924258 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" podUID="b617d316-138a-4c93-a1bf-8bb810a33398" containerName="dnsmasq-dns" containerID="cri-o://80f76c14a785e7460005aca193011ab9c16732b2c6318b05149fd30316046a1e" gracePeriod=10 Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.431088 4812 generic.go:334] "Generic (PLEG): container finished" 
podID="b617d316-138a-4c93-a1bf-8bb810a33398" containerID="80f76c14a785e7460005aca193011ab9c16732b2c6318b05149fd30316046a1e" exitCode=0 Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.431374 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" event={"ID":"b617d316-138a-4c93-a1bf-8bb810a33398","Type":"ContainerDied","Data":"80f76c14a785e7460005aca193011ab9c16732b2c6318b05149fd30316046a1e"} Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.431415 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" event={"ID":"b617d316-138a-4c93-a1bf-8bb810a33398","Type":"ContainerDied","Data":"f4b9ee761b7d4baaf160f6df7989f36daa71c5fe3374174de12477ee2fbf64b4"} Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.431429 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4b9ee761b7d4baaf160f6df7989f36daa71c5fe3374174de12477ee2fbf64b4" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.449326 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.494379 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv72p\" (UniqueName: \"kubernetes.io/projected/b617d316-138a-4c93-a1bf-8bb810a33398-kube-api-access-wv72p\") pod \"b617d316-138a-4c93-a1bf-8bb810a33398\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.494453 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-dns-svc\") pod \"b617d316-138a-4c93-a1bf-8bb810a33398\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.494481 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-sb\") pod \"b617d316-138a-4c93-a1bf-8bb810a33398\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.494529 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-nb\") pod \"b617d316-138a-4c93-a1bf-8bb810a33398\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.494668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-config\") pod \"b617d316-138a-4c93-a1bf-8bb810a33398\" (UID: \"b617d316-138a-4c93-a1bf-8bb810a33398\") " Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.506821 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b617d316-138a-4c93-a1bf-8bb810a33398-kube-api-access-wv72p" (OuterVolumeSpecName: "kube-api-access-wv72p") pod "b617d316-138a-4c93-a1bf-8bb810a33398" (UID: "b617d316-138a-4c93-a1bf-8bb810a33398"). InnerVolumeSpecName "kube-api-access-wv72p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.562841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b617d316-138a-4c93-a1bf-8bb810a33398" (UID: "b617d316-138a-4c93-a1bf-8bb810a33398"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.597399 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv72p\" (UniqueName: \"kubernetes.io/projected/b617d316-138a-4c93-a1bf-8bb810a33398-kube-api-access-wv72p\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.597427 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.612243 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-config" (OuterVolumeSpecName: "config") pod "b617d316-138a-4c93-a1bf-8bb810a33398" (UID: "b617d316-138a-4c93-a1bf-8bb810a33398"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.613796 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b617d316-138a-4c93-a1bf-8bb810a33398" (UID: "b617d316-138a-4c93-a1bf-8bb810a33398"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.643695 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b617d316-138a-4c93-a1bf-8bb810a33398" (UID: "b617d316-138a-4c93-a1bf-8bb810a33398"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.698842 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.698868 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:26 crc kubenswrapper[4812]: I1124 20:55:26.698878 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b617d316-138a-4c93-a1bf-8bb810a33398-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 20:55:27 crc kubenswrapper[4812]: I1124 20:55:27.439953 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6678c6cd77-b7z64" Nov 24 20:55:27 crc kubenswrapper[4812]: I1124 20:55:27.462528 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6678c6cd77-b7z64"] Nov 24 20:55:27 crc kubenswrapper[4812]: I1124 20:55:27.469383 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6678c6cd77-b7z64"] Nov 24 20:55:28 crc kubenswrapper[4812]: I1124 20:55:28.980120 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b617d316-138a-4c93-a1bf-8bb810a33398" path="/var/lib/kubelet/pods/b617d316-138a-4c93-a1bf-8bb810a33398/volumes" Nov 24 20:55:33 crc kubenswrapper[4812]: I1124 20:55:33.755130 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 20:55:33 crc kubenswrapper[4812]: I1124 20:55:33.755781 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 20:55:34 crc kubenswrapper[4812]: I1124 20:55:34.769640 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.101:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 20:55:34 crc kubenswrapper[4812]: I1124 20:55:34.769648 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.101:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 20:55:43 crc kubenswrapper[4812]: I1124 20:55:43.764818 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 20:55:43 crc kubenswrapper[4812]: I1124 20:55:43.765848 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 20:55:43 crc kubenswrapper[4812]: I1124 20:55:43.767583 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 20:55:43 crc kubenswrapper[4812]: I1124 20:55:43.782832 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 20:55:44 crc kubenswrapper[4812]: I1124 20:55:44.651400 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 20:55:44 crc kubenswrapper[4812]: I1124 20:55:44.662012 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 20:56:02 crc kubenswrapper[4812]: I1124 20:56:02.999064 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:56:03 crc kubenswrapper[4812]: I1124 20:56:02.999709 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.017992 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-xrmlq"] Nov 24 20:56:07 crc kubenswrapper[4812]: E1124 20:56:07.018833 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b617d316-138a-4c93-a1bf-8bb810a33398" containerName="init" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.018847 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b617d316-138a-4c93-a1bf-8bb810a33398" containerName="init" Nov 24 20:56:07 crc kubenswrapper[4812]: E1124 20:56:07.018868 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b617d316-138a-4c93-a1bf-8bb810a33398" containerName="dnsmasq-dns" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.018874 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b617d316-138a-4c93-a1bf-8bb810a33398" containerName="dnsmasq-dns" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.019227 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b617d316-138a-4c93-a1bf-8bb810a33398" containerName="dnsmasq-dns" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.021785 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7n6sd"] Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.022702 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.025909 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrmlq"] Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.026537 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.028660 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.029260 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.033994 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7bwlt" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.055285 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7n6sd"] Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207518 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq7nm\" (UniqueName: \"kubernetes.io/projected/00bfc83b-f048-4338-bd95-93abfee34089-kube-api-access-qq7nm\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207633 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-etc-ovs\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207660 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bfc83b-f048-4338-bd95-93abfee34089-ovn-controller-tls-certs\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " 
pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-run\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7be236c-9a14-4aa5-97ed-df06b89d34b7-scripts\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207790 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-log\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207842 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-lib\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207858 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-run-ovn\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207924 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-run\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.207976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6b4p\" (UniqueName: \"kubernetes.io/projected/b7be236c-9a14-4aa5-97ed-df06b89d34b7-kube-api-access-s6b4p\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.208010 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00bfc83b-f048-4338-bd95-93abfee34089-scripts\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.208064 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-log-ovn\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc 
kubenswrapper[4812]: I1124 20:56:07.208088 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bfc83b-f048-4338-bd95-93abfee34089-combined-ca-bundle\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.310518 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-run\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.310050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-run\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.310915 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7be236c-9a14-4aa5-97ed-df06b89d34b7-scripts\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.315868 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7be236c-9a14-4aa5-97ed-df06b89d34b7-scripts\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.316091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-log\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.316225 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-log\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.316445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-lib\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.316561 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-var-lib\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.316677 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-run-ovn\") pod 
\"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.316805 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-run-ovn\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.317007 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-run\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.317117 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-run\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.317206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6b4p\" (UniqueName: \"kubernetes.io/projected/b7be236c-9a14-4aa5-97ed-df06b89d34b7-kube-api-access-s6b4p\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.317873 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00bfc83b-f048-4338-bd95-93abfee34089-scripts\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.319649 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-log-ovn\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.319728 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bfc83b-f048-4338-bd95-93abfee34089-combined-ca-bundle\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.319832 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq7nm\" (UniqueName: \"kubernetes.io/projected/00bfc83b-f048-4338-bd95-93abfee34089-kube-api-access-qq7nm\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.319868 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/00bfc83b-f048-4338-bd95-93abfee34089-var-log-ovn\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.319887 4812 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-etc-ovs\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.320162 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bfc83b-f048-4338-bd95-93abfee34089-ovn-controller-tls-certs\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.320157 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/b7be236c-9a14-4aa5-97ed-df06b89d34b7-etc-ovs\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.322241 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/00bfc83b-f048-4338-bd95-93abfee34089-scripts\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.326165 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/00bfc83b-f048-4338-bd95-93abfee34089-ovn-controller-tls-certs\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.326235 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00bfc83b-f048-4338-bd95-93abfee34089-combined-ca-bundle\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.338118 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6b4p\" (UniqueName: \"kubernetes.io/projected/b7be236c-9a14-4aa5-97ed-df06b89d34b7-kube-api-access-s6b4p\") pod \"ovn-controller-ovs-7n6sd\" (UID: \"b7be236c-9a14-4aa5-97ed-df06b89d34b7\") " pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.346660 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq7nm\" (UniqueName: \"kubernetes.io/projected/00bfc83b-f048-4338-bd95-93abfee34089-kube-api-access-qq7nm\") pod \"ovn-controller-xrmlq\" (UID: \"00bfc83b-f048-4338-bd95-93abfee34089\") " pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.357005 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:07 crc kubenswrapper[4812]: I1124 20:56:07.376753 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.034827 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrmlq"] Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.426201 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-vp2pp"] Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.438159 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-vp2pp"] Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.438242 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.471182 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aa012f-3b22-4157-8f1c-5f3259e247cf-operator-scripts\") pod \"octavia-db-create-vp2pp\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.471519 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhks\" (UniqueName: \"kubernetes.io/projected/51aa012f-3b22-4157-8f1c-5f3259e247cf-kube-api-access-cbhks\") pod \"octavia-db-create-vp2pp\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.509018 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7n6sd"] Nov 24 20:56:08 crc kubenswrapper[4812]: W1124 20:56:08.529668 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7be236c_9a14_4aa5_97ed_df06b89d34b7.slice/crio-8ae26b8eb7755539e922b1df8e956e59951f004e43edbd5605a75f691cb731a4 WatchSource:0}: Error finding container 8ae26b8eb7755539e922b1df8e956e59951f004e43edbd5605a75f691cb731a4: Status 404 returned error can't find the container with id 8ae26b8eb7755539e922b1df8e956e59951f004e43edbd5605a75f691cb731a4 Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.573290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aa012f-3b22-4157-8f1c-5f3259e247cf-operator-scripts\") pod \"octavia-db-create-vp2pp\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.573353 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhks\" (UniqueName: \"kubernetes.io/projected/51aa012f-3b22-4157-8f1c-5f3259e247cf-kube-api-access-cbhks\") pod \"octavia-db-create-vp2pp\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.574389 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aa012f-3b22-4157-8f1c-5f3259e247cf-operator-scripts\") pod \"octavia-db-create-vp2pp\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.591445 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhks\" 
(UniqueName: \"kubernetes.io/projected/51aa012f-3b22-4157-8f1c-5f3259e247cf-kube-api-access-cbhks\") pod \"octavia-db-create-vp2pp\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.784739 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.912541 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7n6sd" event={"ID":"b7be236c-9a14-4aa5-97ed-df06b89d34b7","Type":"ContainerStarted","Data":"06b9f53bc5362d6609a6ecfc9547d22db3644f60393e8df562b99b5bf36b2f40"} Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.912869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7n6sd" event={"ID":"b7be236c-9a14-4aa5-97ed-df06b89d34b7","Type":"ContainerStarted","Data":"8ae26b8eb7755539e922b1df8e956e59951f004e43edbd5605a75f691cb731a4"} Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.917696 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrmlq" event={"ID":"00bfc83b-f048-4338-bd95-93abfee34089","Type":"ContainerStarted","Data":"efa7870df4ecae5599eef1f57fdc0358e03782201077d9d71ddb818348132b78"} Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.917736 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrmlq" event={"ID":"00bfc83b-f048-4338-bd95-93abfee34089","Type":"ContainerStarted","Data":"7a37acf92a626c5fc57ea2828be27b6d6a2cabebfc52b241e8198918488c1e55"} Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.917866 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:08 crc kubenswrapper[4812]: I1124 20:56:08.956132 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xrmlq" podStartSLOduration=2.956112175 podStartE2EDuration="2.956112175s" podCreationTimestamp="2025-11-24 20:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:08.948187211 +0000 UTC m=+5962.737139582" watchObservedRunningTime="2025-11-24 20:56:08.956112175 +0000 UTC m=+5962.745064546" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.261173 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-vp2pp"] Nov 24 20:56:09 crc kubenswrapper[4812]: W1124 20:56:09.268641 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51aa012f_3b22_4157_8f1c_5f3259e247cf.slice/crio-36f1ab9a50860b7ddb184434c29a8b5a95b43179614e0cb47c90cc5f5ca84cdc WatchSource:0}: Error finding container 36f1ab9a50860b7ddb184434c29a8b5a95b43179614e0cb47c90cc5f5ca84cdc: Status 404 returned error can't find the container with id 36f1ab9a50860b7ddb184434c29a8b5a95b43179614e0cb47c90cc5f5ca84cdc Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.372589 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-e71f-account-create-plb85"] Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.374182 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.375814 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.388566 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e71f-account-create-plb85"] Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.488208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnsxc\" (UniqueName: \"kubernetes.io/projected/00a5f747-977c-40b4-81a0-13aae8abac4b-kube-api-access-rnsxc\") pod \"octavia-e71f-account-create-plb85\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.488279 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00a5f747-977c-40b4-81a0-13aae8abac4b-operator-scripts\") pod \"octavia-e71f-account-create-plb85\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.591626 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnsxc\" (UniqueName: \"kubernetes.io/projected/00a5f747-977c-40b4-81a0-13aae8abac4b-kube-api-access-rnsxc\") pod \"octavia-e71f-account-create-plb85\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.591696 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00a5f747-977c-40b4-81a0-13aae8abac4b-operator-scripts\") pod \"octavia-e71f-account-create-plb85\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.592473 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00a5f747-977c-40b4-81a0-13aae8abac4b-operator-scripts\") pod \"octavia-e71f-account-create-plb85\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.616495 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnsxc\" (UniqueName: \"kubernetes.io/projected/00a5f747-977c-40b4-81a0-13aae8abac4b-kube-api-access-rnsxc\") pod \"octavia-e71f-account-create-plb85\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.818534 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.905605 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-qxt9h"] Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.911893 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.914383 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.934934 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-vp2pp" event={"ID":"51aa012f-3b22-4157-8f1c-5f3259e247cf","Type":"ContainerStarted","Data":"9be2e22325ce88be23babfabb1887ceba8b3851dbf9e41db7b9baae9e5576b93"} Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.934987 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-vp2pp" event={"ID":"51aa012f-3b22-4157-8f1c-5f3259e247cf","Type":"ContainerStarted","Data":"36f1ab9a50860b7ddb184434c29a8b5a95b43179614e0cb47c90cc5f5ca84cdc"} Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.935538 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qxt9h"] Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.950655 4812 generic.go:334] "Generic (PLEG): container finished" podID="b7be236c-9a14-4aa5-97ed-df06b89d34b7" containerID="06b9f53bc5362d6609a6ecfc9547d22db3644f60393e8df562b99b5bf36b2f40" exitCode=0 Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.952475 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7n6sd" event={"ID":"b7be236c-9a14-4aa5-97ed-df06b89d34b7","Type":"ContainerDied","Data":"06b9f53bc5362d6609a6ecfc9547d22db3644f60393e8df562b99b5bf36b2f40"} Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.997300 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-create-vp2pp" podStartSLOduration=1.997277191 podStartE2EDuration="1.997277191s" podCreationTimestamp="2025-11-24 20:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:09.960117399 +0000 UTC m=+5963.749069770" watchObservedRunningTime="2025-11-24 20:56:09.997277191 +0000 UTC m=+5963.786229572" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.999203 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0adcb27e-b811-40b7-a37a-4179f629c4f9-ovs-rundir\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.999424 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44ktv\" (UniqueName: \"kubernetes.io/projected/0adcb27e-b811-40b7-a37a-4179f629c4f9-kube-api-access-44ktv\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.999519 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0adcb27e-b811-40b7-a37a-4179f629c4f9-ovn-rundir\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.999557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0adcb27e-b811-40b7-a37a-4179f629c4f9-combined-ca-bundle\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.999605 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adcb27e-b811-40b7-a37a-4179f629c4f9-config\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:09 crc kubenswrapper[4812]: I1124 20:56:09.999686 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0adcb27e-b811-40b7-a37a-4179f629c4f9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.110900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0adcb27e-b811-40b7-a37a-4179f629c4f9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.111795 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0adcb27e-b811-40b7-a37a-4179f629c4f9-ovs-rundir\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.111896 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44ktv\" (UniqueName: \"kubernetes.io/projected/0adcb27e-b811-40b7-a37a-4179f629c4f9-kube-api-access-44ktv\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.111954 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0adcb27e-b811-40b7-a37a-4179f629c4f9-ovn-rundir\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.111988 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0adcb27e-b811-40b7-a37a-4179f629c4f9-combined-ca-bundle\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.112021 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adcb27e-b811-40b7-a37a-4179f629c4f9-config\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.112747 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/host-path/0adcb27e-b811-40b7-a37a-4179f629c4f9-ovn-rundir\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.113866 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0adcb27e-b811-40b7-a37a-4179f629c4f9-ovs-rundir\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.114886 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0adcb27e-b811-40b7-a37a-4179f629c4f9-config\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.118152 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0adcb27e-b811-40b7-a37a-4179f629c4f9-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.120301 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0adcb27e-b811-40b7-a37a-4179f629c4f9-combined-ca-bundle\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.132246 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44ktv\" (UniqueName: \"kubernetes.io/projected/0adcb27e-b811-40b7-a37a-4179f629c4f9-kube-api-access-44ktv\") pod \"ovn-controller-metrics-qxt9h\" (UID: \"0adcb27e-b811-40b7-a37a-4179f629c4f9\") " pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.284747 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-qxt9h" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.479447 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e71f-account-create-plb85"] Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.860674 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-qxt9h"] Nov 24 20:56:10 crc kubenswrapper[4812]: W1124 20:56:10.885356 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0adcb27e_b811_40b7_a37a_4179f629c4f9.slice/crio-2076fb3c5ba5053d82c651b6e0232599c020795c3baf49737d3132155615c44f WatchSource:0}: Error finding container 2076fb3c5ba5053d82c651b6e0232599c020795c3baf49737d3132155615c44f: Status 404 returned error can't find the container with id 2076fb3c5ba5053d82c651b6e0232599c020795c3baf49737d3132155615c44f Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.963567 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qxt9h" event={"ID":"0adcb27e-b811-40b7-a37a-4179f629c4f9","Type":"ContainerStarted","Data":"2076fb3c5ba5053d82c651b6e0232599c020795c3baf49737d3132155615c44f"} Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.978685 4812 generic.go:334] "Generic (PLEG): container finished" podID="51aa012f-3b22-4157-8f1c-5f3259e247cf" containerID="9be2e22325ce88be23babfabb1887ceba8b3851dbf9e41db7b9baae9e5576b93" exitCode=0 Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.982906 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.982954 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e71f-account-create-plb85" event={"ID":"00a5f747-977c-40b4-81a0-13aae8abac4b","Type":"ContainerStarted","Data":"a218716b939c05345035ce286cc5d597587d852fd31b35d6e8891cf6de31311f"} Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.982980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e71f-account-create-plb85" event={"ID":"00a5f747-977c-40b4-81a0-13aae8abac4b","Type":"ContainerStarted","Data":"2c4a59c29f3515bd5af8758f64dd18d78f4616acf19e4d94d8ef14d318e2d6e0"} Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.983027 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-vp2pp" event={"ID":"51aa012f-3b22-4157-8f1c-5f3259e247cf","Type":"ContainerDied","Data":"9be2e22325ce88be23babfabb1887ceba8b3851dbf9e41db7b9baae9e5576b93"} Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.983043 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7n6sd" event={"ID":"b7be236c-9a14-4aa5-97ed-df06b89d34b7","Type":"ContainerStarted","Data":"aa777f106234df47274c525ae3b26c75e8b38b8acd820c9a2a732a95c12c7b05"} Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.983055 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7n6sd" event={"ID":"b7be236c-9a14-4aa5-97ed-df06b89d34b7","Type":"ContainerStarted","Data":"b5e406af581eb38a6ea02ae7bac8197a549e34e5605a10f2ada3829467388aa3"} Nov 24 20:56:10 crc kubenswrapper[4812]: I1124 20:56:10.993086 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-e71f-account-create-plb85" podStartSLOduration=1.993070391 podStartE2EDuration="1.993070391s" podCreationTimestamp="2025-11-24 20:56:09 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:10.988915254 +0000 UTC m=+5964.777867625" watchObservedRunningTime="2025-11-24 20:56:10.993070391 +0000 UTC m=+5964.782022752" Nov 24 20:56:11 crc kubenswrapper[4812]: I1124 20:56:11.029277 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7n6sd" podStartSLOduration=5.029223184 podStartE2EDuration="5.029223184s" podCreationTimestamp="2025-11-24 20:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:11.024866211 +0000 UTC m=+5964.813818592" watchObservedRunningTime="2025-11-24 20:56:11.029223184 +0000 UTC m=+5964.818175555" Nov 24 20:56:11 crc kubenswrapper[4812]: I1124 20:56:11.995764 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-qxt9h" event={"ID":"0adcb27e-b811-40b7-a37a-4179f629c4f9","Type":"ContainerStarted","Data":"ee0cdd2a53693e35ece1e0423ad5f3d9710efe841b55790208d30b7290a330d9"} Nov 24 20:56:11 crc kubenswrapper[4812]: I1124 20:56:11.998040 4812 generic.go:334] "Generic (PLEG): container finished" podID="00a5f747-977c-40b4-81a0-13aae8abac4b" containerID="a218716b939c05345035ce286cc5d597587d852fd31b35d6e8891cf6de31311f" exitCode=0 Nov 24 20:56:11 crc kubenswrapper[4812]: I1124 20:56:11.999178 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e71f-account-create-plb85" event={"ID":"00a5f747-977c-40b4-81a0-13aae8abac4b","Type":"ContainerDied","Data":"a218716b939c05345035ce286cc5d597587d852fd31b35d6e8891cf6de31311f"} Nov 24 20:56:11 crc kubenswrapper[4812]: I1124 20:56:11.999319 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.021306 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-qxt9h" podStartSLOduration=3.021288421 podStartE2EDuration="3.021288421s" podCreationTimestamp="2025-11-24 20:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:12.010431804 +0000 UTC m=+5965.799384175" watchObservedRunningTime="2025-11-24 20:56:12.021288421 +0000 UTC m=+5965.810240792" Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.441097 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.574853 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbhks\" (UniqueName: \"kubernetes.io/projected/51aa012f-3b22-4157-8f1c-5f3259e247cf-kube-api-access-cbhks\") pod \"51aa012f-3b22-4157-8f1c-5f3259e247cf\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.575031 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aa012f-3b22-4157-8f1c-5f3259e247cf-operator-scripts\") pod \"51aa012f-3b22-4157-8f1c-5f3259e247cf\" (UID: \"51aa012f-3b22-4157-8f1c-5f3259e247cf\") " Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.576235 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51aa012f-3b22-4157-8f1c-5f3259e247cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51aa012f-3b22-4157-8f1c-5f3259e247cf" (UID: "51aa012f-3b22-4157-8f1c-5f3259e247cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.584168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51aa012f-3b22-4157-8f1c-5f3259e247cf-kube-api-access-cbhks" (OuterVolumeSpecName: "kube-api-access-cbhks") pod "51aa012f-3b22-4157-8f1c-5f3259e247cf" (UID: "51aa012f-3b22-4157-8f1c-5f3259e247cf"). InnerVolumeSpecName "kube-api-access-cbhks". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.679560 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbhks\" (UniqueName: \"kubernetes.io/projected/51aa012f-3b22-4157-8f1c-5f3259e247cf-kube-api-access-cbhks\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:12 crc kubenswrapper[4812]: I1124 20:56:12.679602 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51aa012f-3b22-4157-8f1c-5f3259e247cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.020272 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-vp2pp" event={"ID":"51aa012f-3b22-4157-8f1c-5f3259e247cf","Type":"ContainerDied","Data":"36f1ab9a50860b7ddb184434c29a8b5a95b43179614e0cb47c90cc5f5ca84cdc"} Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.020742 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36f1ab9a50860b7ddb184434c29a8b5a95b43179614e0cb47c90cc5f5ca84cdc" Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.020367 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-vp2pp" Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.369507 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.500326 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00a5f747-977c-40b4-81a0-13aae8abac4b-operator-scripts\") pod \"00a5f747-977c-40b4-81a0-13aae8abac4b\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.501056 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnsxc\" (UniqueName: \"kubernetes.io/projected/00a5f747-977c-40b4-81a0-13aae8abac4b-kube-api-access-rnsxc\") pod \"00a5f747-977c-40b4-81a0-13aae8abac4b\" (UID: \"00a5f747-977c-40b4-81a0-13aae8abac4b\") " Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.502053 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00a5f747-977c-40b4-81a0-13aae8abac4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00a5f747-977c-40b4-81a0-13aae8abac4b" (UID: "00a5f747-977c-40b4-81a0-13aae8abac4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.510049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a5f747-977c-40b4-81a0-13aae8abac4b-kube-api-access-rnsxc" (OuterVolumeSpecName: "kube-api-access-rnsxc") pod "00a5f747-977c-40b4-81a0-13aae8abac4b" (UID: "00a5f747-977c-40b4-81a0-13aae8abac4b"). InnerVolumeSpecName "kube-api-access-rnsxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.603185 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnsxc\" (UniqueName: \"kubernetes.io/projected/00a5f747-977c-40b4-81a0-13aae8abac4b-kube-api-access-rnsxc\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:13 crc kubenswrapper[4812]: I1124 20:56:13.603222 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00a5f747-977c-40b4-81a0-13aae8abac4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:14 crc kubenswrapper[4812]: I1124 20:56:14.030732 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e71f-account-create-plb85" event={"ID":"00a5f747-977c-40b4-81a0-13aae8abac4b","Type":"ContainerDied","Data":"2c4a59c29f3515bd5af8758f64dd18d78f4616acf19e4d94d8ef14d318e2d6e0"} Nov 24 20:56:14 crc kubenswrapper[4812]: I1124 20:56:14.030779 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c4a59c29f3515bd5af8758f64dd18d78f4616acf19e4d94d8ef14d318e2d6e0" Nov 24 20:56:14 crc kubenswrapper[4812]: I1124 20:56:14.030752 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-e71f-account-create-plb85" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.376520 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-ngtmc"] Nov 24 20:56:15 crc kubenswrapper[4812]: E1124 20:56:15.377395 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a5f747-977c-40b4-81a0-13aae8abac4b" containerName="mariadb-account-create" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.377410 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a5f747-977c-40b4-81a0-13aae8abac4b" containerName="mariadb-account-create" Nov 24 20:56:15 crc kubenswrapper[4812]: E1124 20:56:15.377429 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51aa012f-3b22-4157-8f1c-5f3259e247cf" containerName="mariadb-database-create" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.377435 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="51aa012f-3b22-4157-8f1c-5f3259e247cf" containerName="mariadb-database-create" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.377634 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="00a5f747-977c-40b4-81a0-13aae8abac4b" containerName="mariadb-account-create" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.377668 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="51aa012f-3b22-4157-8f1c-5f3259e247cf" containerName="mariadb-database-create" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.378960 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.401813 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-ngtmc"] Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.438355 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916ac76-e4bf-4630-9290-f98440a62829-operator-scripts\") pod \"octavia-persistence-db-create-ngtmc\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.438491 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9knf\" (UniqueName: \"kubernetes.io/projected/c916ac76-e4bf-4630-9290-f98440a62829-kube-api-access-v9knf\") pod \"octavia-persistence-db-create-ngtmc\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.540358 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9knf\" (UniqueName: \"kubernetes.io/projected/c916ac76-e4bf-4630-9290-f98440a62829-kube-api-access-v9knf\") pod \"octavia-persistence-db-create-ngtmc\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.540504 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916ac76-e4bf-4630-9290-f98440a62829-operator-scripts\") pod \"octavia-persistence-db-create-ngtmc\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:15 
crc kubenswrapper[4812]: I1124 20:56:15.541479 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916ac76-e4bf-4630-9290-f98440a62829-operator-scripts\") pod \"octavia-persistence-db-create-ngtmc\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.560275 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9knf\" (UniqueName: \"kubernetes.io/projected/c916ac76-e4bf-4630-9290-f98440a62829-kube-api-access-v9knf\") pod \"octavia-persistence-db-create-ngtmc\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:15 crc kubenswrapper[4812]: I1124 20:56:15.699314 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.033358 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-e98c-account-create-kr4vr"] Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.035822 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.037461 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.066695 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e98c-account-create-kr4vr"] Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.153180 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2689091-8414-46ad-9ff3-a7ba09821ae0-operator-scripts\") pod \"octavia-e98c-account-create-kr4vr\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.153532 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw67r\" (UniqueName: \"kubernetes.io/projected/f2689091-8414-46ad-9ff3-a7ba09821ae0-kube-api-access-tw67r\") pod \"octavia-e98c-account-create-kr4vr\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.230865 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-ngtmc"] Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.255620 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2689091-8414-46ad-9ff3-a7ba09821ae0-operator-scripts\") pod \"octavia-e98c-account-create-kr4vr\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.255791 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw67r\" (UniqueName: \"kubernetes.io/projected/f2689091-8414-46ad-9ff3-a7ba09821ae0-kube-api-access-tw67r\") pod \"octavia-e98c-account-create-kr4vr\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc 
kubenswrapper[4812]: I1124 20:56:16.256401 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2689091-8414-46ad-9ff3-a7ba09821ae0-operator-scripts\") pod \"octavia-e98c-account-create-kr4vr\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.279316 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw67r\" (UniqueName: \"kubernetes.io/projected/f2689091-8414-46ad-9ff3-a7ba09821ae0-kube-api-access-tw67r\") pod \"octavia-e98c-account-create-kr4vr\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.370938 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:16 crc kubenswrapper[4812]: W1124 20:56:16.637919 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2689091_8414_46ad_9ff3_a7ba09821ae0.slice/crio-05fa550f365f790d75f86ca6160fbbf135115b7231a635e5552d97cd802bdf69 WatchSource:0}: Error finding container 05fa550f365f790d75f86ca6160fbbf135115b7231a635e5552d97cd802bdf69: Status 404 returned error can't find the container with id 05fa550f365f790d75f86ca6160fbbf135115b7231a635e5552d97cd802bdf69 Nov 24 20:56:16 crc kubenswrapper[4812]: I1124 20:56:16.642684 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-e98c-account-create-kr4vr"] Nov 24 20:56:17 crc kubenswrapper[4812]: I1124 20:56:17.063408 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e98c-account-create-kr4vr" event={"ID":"f2689091-8414-46ad-9ff3-a7ba09821ae0","Type":"ContainerStarted","Data":"64ce64891f7dfb7fa6e4963d21f239138b26c94a48f6a665cf57ea61ab57bd4a"} Nov 24 20:56:17 crc kubenswrapper[4812]: I1124 20:56:17.063847 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e98c-account-create-kr4vr" event={"ID":"f2689091-8414-46ad-9ff3-a7ba09821ae0","Type":"ContainerStarted","Data":"05fa550f365f790d75f86ca6160fbbf135115b7231a635e5552d97cd802bdf69"} Nov 24 20:56:17 crc kubenswrapper[4812]: I1124 20:56:17.067471 4812 generic.go:334] "Generic (PLEG): container finished" podID="c916ac76-e4bf-4630-9290-f98440a62829" containerID="3035ebcbea5a2f5adeb3b1aecfc1b950baf4eaf8cfeed1475779784e15de9d28" exitCode=0 Nov 24 20:56:17 crc kubenswrapper[4812]: I1124 20:56:17.067572 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-ngtmc" event={"ID":"c916ac76-e4bf-4630-9290-f98440a62829","Type":"ContainerDied","Data":"3035ebcbea5a2f5adeb3b1aecfc1b950baf4eaf8cfeed1475779784e15de9d28"} Nov 24 20:56:17 crc kubenswrapper[4812]: I1124 20:56:17.067617 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-ngtmc" event={"ID":"c916ac76-e4bf-4630-9290-f98440a62829","Type":"ContainerStarted","Data":"1723bd648808ad0cf3dfb388368443f060ef273a4b46847302dea4f396926619"} Nov 24 20:56:17 crc kubenswrapper[4812]: I1124 20:56:17.098300 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-e98c-account-create-kr4vr" podStartSLOduration=1.098278142 podStartE2EDuration="1.098278142s" podCreationTimestamp="2025-11-24 20:56:16 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:17.082766403 +0000 UTC m=+5970.871718794" watchObservedRunningTime="2025-11-24 20:56:17.098278142 +0000 UTC m=+5970.887230523" Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.090124 4812 generic.go:334] "Generic (PLEG): container finished" podID="f2689091-8414-46ad-9ff3-a7ba09821ae0" containerID="64ce64891f7dfb7fa6e4963d21f239138b26c94a48f6a665cf57ea61ab57bd4a" exitCode=0 Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.090282 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e98c-account-create-kr4vr" event={"ID":"f2689091-8414-46ad-9ff3-a7ba09821ae0","Type":"ContainerDied","Data":"64ce64891f7dfb7fa6e4963d21f239138b26c94a48f6a665cf57ea61ab57bd4a"} Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.509641 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.603369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916ac76-e4bf-4630-9290-f98440a62829-operator-scripts\") pod \"c916ac76-e4bf-4630-9290-f98440a62829\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.603647 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9knf\" (UniqueName: \"kubernetes.io/projected/c916ac76-e4bf-4630-9290-f98440a62829-kube-api-access-v9knf\") pod \"c916ac76-e4bf-4630-9290-f98440a62829\" (UID: \"c916ac76-e4bf-4630-9290-f98440a62829\") " Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.604649 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c916ac76-e4bf-4630-9290-f98440a62829-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c916ac76-e4bf-4630-9290-f98440a62829" (UID: "c916ac76-e4bf-4630-9290-f98440a62829"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.610267 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c916ac76-e4bf-4630-9290-f98440a62829-kube-api-access-v9knf" (OuterVolumeSpecName: "kube-api-access-v9knf") pod "c916ac76-e4bf-4630-9290-f98440a62829" (UID: "c916ac76-e4bf-4630-9290-f98440a62829"). InnerVolumeSpecName "kube-api-access-v9knf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.706829 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9knf\" (UniqueName: \"kubernetes.io/projected/c916ac76-e4bf-4630-9290-f98440a62829-kube-api-access-v9knf\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:18 crc kubenswrapper[4812]: I1124 20:56:18.706886 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c916ac76-e4bf-4630-9290-f98440a62829-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.104851 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-ngtmc" event={"ID":"c916ac76-e4bf-4630-9290-f98440a62829","Type":"ContainerDied","Data":"1723bd648808ad0cf3dfb388368443f060ef273a4b46847302dea4f396926619"} Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.105190 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1723bd648808ad0cf3dfb388368443f060ef273a4b46847302dea4f396926619" Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.104875 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-ngtmc" Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.570704 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.643648 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw67r\" (UniqueName: \"kubernetes.io/projected/f2689091-8414-46ad-9ff3-a7ba09821ae0-kube-api-access-tw67r\") pod \"f2689091-8414-46ad-9ff3-a7ba09821ae0\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.647919 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2689091-8414-46ad-9ff3-a7ba09821ae0-operator-scripts\") pod \"f2689091-8414-46ad-9ff3-a7ba09821ae0\" (UID: \"f2689091-8414-46ad-9ff3-a7ba09821ae0\") " Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.651492 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2689091-8414-46ad-9ff3-a7ba09821ae0-kube-api-access-tw67r" (OuterVolumeSpecName: "kube-api-access-tw67r") pod "f2689091-8414-46ad-9ff3-a7ba09821ae0" (UID: "f2689091-8414-46ad-9ff3-a7ba09821ae0"). InnerVolumeSpecName "kube-api-access-tw67r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.657505 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2689091-8414-46ad-9ff3-a7ba09821ae0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2689091-8414-46ad-9ff3-a7ba09821ae0" (UID: "f2689091-8414-46ad-9ff3-a7ba09821ae0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.755682 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2689091-8414-46ad-9ff3-a7ba09821ae0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:19 crc kubenswrapper[4812]: I1124 20:56:19.755729 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw67r\" (UniqueName: \"kubernetes.io/projected/f2689091-8414-46ad-9ff3-a7ba09821ae0-kube-api-access-tw67r\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:20 crc kubenswrapper[4812]: I1124 20:56:20.119144 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-e98c-account-create-kr4vr" event={"ID":"f2689091-8414-46ad-9ff3-a7ba09821ae0","Type":"ContainerDied","Data":"05fa550f365f790d75f86ca6160fbbf135115b7231a635e5552d97cd802bdf69"} Nov 24 20:56:20 crc kubenswrapper[4812]: I1124 20:56:20.119195 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05fa550f365f790d75f86ca6160fbbf135115b7231a635e5552d97cd802bdf69" Nov 24 20:56:20 crc kubenswrapper[4812]: I1124 20:56:20.119257 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-e98c-account-create-kr4vr" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.502949 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-6dfb9bffb6-fskh7"] Nov 24 20:56:21 crc kubenswrapper[4812]: E1124 20:56:21.504360 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c916ac76-e4bf-4630-9290-f98440a62829" containerName="mariadb-database-create" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.504450 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c916ac76-e4bf-4630-9290-f98440a62829" containerName="mariadb-database-create" Nov 24 20:56:21 crc kubenswrapper[4812]: E1124 20:56:21.504521 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2689091-8414-46ad-9ff3-a7ba09821ae0" containerName="mariadb-account-create" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.504579 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2689091-8414-46ad-9ff3-a7ba09821ae0" containerName="mariadb-account-create" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.504813 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c916ac76-e4bf-4630-9290-f98440a62829" containerName="mariadb-database-create" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.504888 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2689091-8414-46ad-9ff3-a7ba09821ae0" containerName="mariadb-account-create" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.506215 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.508786 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.508955 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-ovndbs" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.509217 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-qrdfv" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.509351 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.514855 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6dfb9bffb6-fskh7"] Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.598897 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data-merged\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.598952 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-octavia-run\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.599031 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-ovndb-tls-certs\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.599092 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.599205 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-combined-ca-bundle\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.599424 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-scripts\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.701370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-octavia-run\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.701498 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-ovndb-tls-certs\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.701564 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.701603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-combined-ca-bundle\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.701662 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-scripts\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.701739 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data-merged\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.702461 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-octavia-run\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.702532 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data-merged\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.707576 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-scripts\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.707980 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-combined-ca-bundle\") pod 
\"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.708193 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-ovndb-tls-certs\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.709060 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data\") pod \"octavia-api-6dfb9bffb6-fskh7\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:21 crc kubenswrapper[4812]: I1124 20:56:21.831983 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:22 crc kubenswrapper[4812]: I1124 20:56:22.341695 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-6dfb9bffb6-fskh7"] Nov 24 20:56:22 crc kubenswrapper[4812]: W1124 20:56:22.350450 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd86d2f3d_5efe_452c_9af4_91c2b509bced.slice/crio-e35631aa6bdf30e6ef243598165f1718a8d11f66aa232ed94e19b01575aaf440 WatchSource:0}: Error finding container e35631aa6bdf30e6ef243598165f1718a8d11f66aa232ed94e19b01575aaf440: Status 404 returned error can't find the container with id e35631aa6bdf30e6ef243598165f1718a8d11f66aa232ed94e19b01575aaf440 Nov 24 20:56:23 crc kubenswrapper[4812]: I1124 20:56:23.145889 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6dfb9bffb6-fskh7" event={"ID":"d86d2f3d-5efe-452c-9af4-91c2b509bced","Type":"ContainerStarted","Data":"e35631aa6bdf30e6ef243598165f1718a8d11f66aa232ed94e19b01575aaf440"} Nov 24 20:56:32 crc kubenswrapper[4812]: I1124 20:56:32.236570 4812 generic.go:334] "Generic (PLEG): container finished" podID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerID="07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9" exitCode=0 Nov 24 20:56:32 crc kubenswrapper[4812]: I1124 20:56:32.237282 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6dfb9bffb6-fskh7" event={"ID":"d86d2f3d-5efe-452c-9af4-91c2b509bced","Type":"ContainerDied","Data":"07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9"} Nov 24 20:56:32 crc kubenswrapper[4812]: I1124 20:56:32.998004 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:56:32 crc kubenswrapper[4812]: I1124 20:56:32.998384 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:56:33 crc kubenswrapper[4812]: I1124 20:56:33.254019 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/octavia-api-6dfb9bffb6-fskh7" event={"ID":"d86d2f3d-5efe-452c-9af4-91c2b509bced","Type":"ContainerStarted","Data":"493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f"} Nov 24 20:56:33 crc kubenswrapper[4812]: I1124 20:56:33.254370 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6dfb9bffb6-fskh7" event={"ID":"d86d2f3d-5efe-452c-9af4-91c2b509bced","Type":"ContainerStarted","Data":"9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b"} Nov 24 20:56:33 crc kubenswrapper[4812]: I1124 20:56:33.254595 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:33 crc kubenswrapper[4812]: I1124 20:56:33.254649 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:33 crc kubenswrapper[4812]: I1124 20:56:33.281026 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-6dfb9bffb6-fskh7" podStartSLOduration=3.12874746 podStartE2EDuration="12.281003453s" podCreationTimestamp="2025-11-24 20:56:21 +0000 UTC" firstStartedPulling="2025-11-24 20:56:22.352640743 +0000 UTC m=+5976.141593154" lastFinishedPulling="2025-11-24 20:56:31.504896746 +0000 UTC m=+5985.293849147" observedRunningTime="2025-11-24 20:56:33.274310933 +0000 UTC m=+5987.063263344" watchObservedRunningTime="2025-11-24 20:56:33.281003453 +0000 UTC m=+5987.069955824" Nov 24 20:56:37 crc kubenswrapper[4812]: I1124 20:56:37.066860 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-h5786"] Nov 24 20:56:37 crc kubenswrapper[4812]: I1124 20:56:37.076504 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-688e-account-create-6wp5w"] Nov 24 20:56:37 crc kubenswrapper[4812]: I1124 20:56:37.087942 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-h5786"] Nov 24 20:56:37 crc kubenswrapper[4812]: I1124 20:56:37.095798 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-688e-account-create-6wp5w"] Nov 24 20:56:38 crc kubenswrapper[4812]: I1124 20:56:38.987255 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0" path="/var/lib/kubelet/pods/119cd9b0-4a1e-4eeb-a8c6-46e31fc619e0/volumes" Nov 24 20:56:38 crc kubenswrapper[4812]: I1124 20:56:38.988921 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc8526b1-356f-4e7f-bee9-4af150a24496" path="/var/lib/kubelet/pods/dc8526b1-356f-4e7f-bee9-4af150a24496/volumes" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.428087 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xrmlq" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.446111 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.456687 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7n6sd" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.588420 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xrmlq-config-6k77f"] Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.589822 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.593586 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.600604 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrmlq-config-6k77f"] Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.735048 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9dx\" (UniqueName: \"kubernetes.io/projected/ccf7ea57-8fd0-4515-be74-9f7e940d0380-kube-api-access-nc9dx\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.735108 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-log-ovn\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.735157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.735379 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-scripts\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.735577 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run-ovn\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.735668 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-additional-scripts\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.837582 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-scripts\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.837700 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run-ovn\") pod 
\"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.837750 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-additional-scripts\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.837788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9dx\" (UniqueName: \"kubernetes.io/projected/ccf7ea57-8fd0-4515-be74-9f7e940d0380-kube-api-access-nc9dx\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.837818 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-log-ovn\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.837859 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.838054 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.838058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run-ovn\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.838217 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-log-ovn\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.838719 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-additional-scripts\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.840294 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-scripts\") pod 
\"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.860137 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9dx\" (UniqueName: \"kubernetes.io/projected/ccf7ea57-8fd0-4515-be74-9f7e940d0380-kube-api-access-nc9dx\") pod \"ovn-controller-xrmlq-config-6k77f\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:42 crc kubenswrapper[4812]: I1124 20:56:42.909394 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:43 crc kubenswrapper[4812]: I1124 20:56:43.049631 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-x6dpw"] Nov 24 20:56:43 crc kubenswrapper[4812]: I1124 20:56:43.058272 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-x6dpw"] Nov 24 20:56:43 crc kubenswrapper[4812]: I1124 20:56:43.771721 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xrmlq-config-6k77f"] Nov 24 20:56:44 crc kubenswrapper[4812]: I1124 20:56:44.382293 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrmlq-config-6k77f" event={"ID":"ccf7ea57-8fd0-4515-be74-9f7e940d0380","Type":"ContainerStarted","Data":"626f4f319849b2a007b337d0e47ca243b36a8ede53f59bcacc89682800a8a007"} Nov 24 20:56:44 crc kubenswrapper[4812]: I1124 20:56:44.382680 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrmlq-config-6k77f" event={"ID":"ccf7ea57-8fd0-4515-be74-9f7e940d0380","Type":"ContainerStarted","Data":"2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2"} Nov 24 20:56:44 crc kubenswrapper[4812]: I1124 20:56:44.416230 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xrmlq-config-6k77f" podStartSLOduration=2.416211134 podStartE2EDuration="2.416211134s" podCreationTimestamp="2025-11-24 20:56:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:44.402836056 +0000 UTC m=+5998.191788457" watchObservedRunningTime="2025-11-24 20:56:44.416211134 +0000 UTC m=+5998.205163515" Nov 24 20:56:44 crc kubenswrapper[4812]: I1124 20:56:44.981297 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349153cd-04ae-4ca4-8a0d-f688bb60c77e" path="/var/lib/kubelet/pods/349153cd-04ae-4ca4-8a0d-f688bb60c77e/volumes" Nov 24 20:56:45 crc kubenswrapper[4812]: I1124 20:56:45.399672 4812 generic.go:334] "Generic (PLEG): container finished" podID="ccf7ea57-8fd0-4515-be74-9f7e940d0380" containerID="626f4f319849b2a007b337d0e47ca243b36a8ede53f59bcacc89682800a8a007" exitCode=0 Nov 24 20:56:45 crc kubenswrapper[4812]: I1124 20:56:45.399737 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrmlq-config-6k77f" event={"ID":"ccf7ea57-8fd0-4515-be74-9f7e940d0380","Type":"ContainerDied","Data":"626f4f319849b2a007b337d0e47ca243b36a8ede53f59bcacc89682800a8a007"} Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.791717 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.926796 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run-ovn\") pod \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.926916 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-scripts\") pod \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.926992 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run\") pod \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.927038 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc9dx\" (UniqueName: \"kubernetes.io/projected/ccf7ea57-8fd0-4515-be74-9f7e940d0380-kube-api-access-nc9dx\") pod \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.927046 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ccf7ea57-8fd0-4515-be74-9f7e940d0380" (UID: "ccf7ea57-8fd0-4515-be74-9f7e940d0380"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.927111 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run" (OuterVolumeSpecName: "var-run") pod "ccf7ea57-8fd0-4515-be74-9f7e940d0380" (UID: "ccf7ea57-8fd0-4515-be74-9f7e940d0380"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.927089 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-additional-scripts\") pod \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.927264 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-log-ovn\") pod \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\" (UID: \"ccf7ea57-8fd0-4515-be74-9f7e940d0380\") " Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.927418 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ccf7ea57-8fd0-4515-be74-9f7e940d0380" (UID: "ccf7ea57-8fd0-4515-be74-9f7e940d0380"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.928175 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ccf7ea57-8fd0-4515-be74-9f7e940d0380" (UID: "ccf7ea57-8fd0-4515-be74-9f7e940d0380"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.928485 4812 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.928523 4812 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.928543 4812 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.928561 4812 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ccf7ea57-8fd0-4515-be74-9f7e940d0380-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.928640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-scripts" (OuterVolumeSpecName: "scripts") pod "ccf7ea57-8fd0-4515-be74-9f7e940d0380" (UID: "ccf7ea57-8fd0-4515-be74-9f7e940d0380"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 20:56:46 crc kubenswrapper[4812]: I1124 20:56:46.933993 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccf7ea57-8fd0-4515-be74-9f7e940d0380-kube-api-access-nc9dx" (OuterVolumeSpecName: "kube-api-access-nc9dx") pod "ccf7ea57-8fd0-4515-be74-9f7e940d0380" (UID: "ccf7ea57-8fd0-4515-be74-9f7e940d0380"). InnerVolumeSpecName "kube-api-access-nc9dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:56:47 crc kubenswrapper[4812]: I1124 20:56:47.031186 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc9dx\" (UniqueName: \"kubernetes.io/projected/ccf7ea57-8fd0-4515-be74-9f7e940d0380-kube-api-access-nc9dx\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:47 crc kubenswrapper[4812]: I1124 20:56:47.031221 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ccf7ea57-8fd0-4515-be74-9f7e940d0380-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:56:47 crc kubenswrapper[4812]: I1124 20:56:47.437314 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xrmlq-config-6k77f" event={"ID":"ccf7ea57-8fd0-4515-be74-9f7e940d0380","Type":"ContainerDied","Data":"2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2"} Nov 24 20:56:47 crc kubenswrapper[4812]: I1124 20:56:47.437400 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2" Nov 24 20:56:47 crc kubenswrapper[4812]: I1124 20:56:47.437479 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xrmlq-config-6k77f" Nov 24 20:56:47 crc kubenswrapper[4812]: I1124 20:56:47.910475 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xrmlq-config-6k77f"] Nov 24 20:56:47 crc kubenswrapper[4812]: I1124 20:56:47.921765 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xrmlq-config-6k77f"] Nov 24 20:56:48 crc kubenswrapper[4812]: I1124 20:56:48.977611 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccf7ea57-8fd0-4515-be74-9f7e940d0380" path="/var/lib/kubelet/pods/ccf7ea57-8fd0-4515-be74-9f7e940d0380/volumes" Nov 24 20:56:48 crc kubenswrapper[4812]: I1124 20:56:48.999047 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-5mflp"] Nov 24 20:56:49 crc kubenswrapper[4812]: E1124 20:56:49.002840 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccf7ea57-8fd0-4515-be74-9f7e940d0380" containerName="ovn-config" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.002876 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccf7ea57-8fd0-4515-be74-9f7e940d0380" containerName="ovn-config" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.003165 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccf7ea57-8fd0-4515-be74-9f7e940d0380" containerName="ovn-config" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.004190 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.009764 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.010108 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.010468 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.012794 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-5mflp"] Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.083766 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-scripts\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.083988 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-hm-ports\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.084019 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-config-data-merged\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.084047 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-config-data\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.185819 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-hm-ports\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.185894 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-config-data-merged\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.185930 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-config-data\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.185984 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-scripts\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.186767 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-hm-ports\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.186907 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-config-data-merged\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.193115 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-config-data\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.193311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d5ceee2-ae1a-468e-9c5f-ed0160b4db18-scripts\") pod \"octavia-rsyslog-5mflp\" (UID: \"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18\") " pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.329044 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.717847 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-5955f5554b-x7gsx"] Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.723379 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.731321 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.738531 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-x7gsx"] Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.797978 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1c4cb7be-9f38-4cbf-b05c-d531f336818c-amphora-image\") pod \"octavia-image-upload-5955f5554b-x7gsx\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.798034 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c4cb7be-9f38-4cbf-b05c-d531f336818c-httpd-config\") pod \"octavia-image-upload-5955f5554b-x7gsx\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.892070 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-5mflp"] Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.899733 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1c4cb7be-9f38-4cbf-b05c-d531f336818c-amphora-image\") pod \"octavia-image-upload-5955f5554b-x7gsx\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.900189 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1c4cb7be-9f38-4cbf-b05c-d531f336818c-amphora-image\") pod \"octavia-image-upload-5955f5554b-x7gsx\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.900319 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c4cb7be-9f38-4cbf-b05c-d531f336818c-httpd-config\") pod \"octavia-image-upload-5955f5554b-x7gsx\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.906029 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c4cb7be-9f38-4cbf-b05c-d531f336818c-httpd-config\") pod \"octavia-image-upload-5955f5554b-x7gsx\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:49 crc kubenswrapper[4812]: I1124 20:56:49.907938 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 20:56:50 crc kubenswrapper[4812]: I1124 20:56:50.057743 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:56:50 crc kubenswrapper[4812]: I1124 20:56:50.471373 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-5mflp" event={"ID":"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18","Type":"ContainerStarted","Data":"72b30264cd6dafe32591a3644d1a52255d7118f77e36881652d7dac1d502dad4"} Nov 24 20:56:50 crc kubenswrapper[4812]: I1124 20:56:50.505866 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-x7gsx"] Nov 24 20:56:50 crc kubenswrapper[4812]: W1124 20:56:50.512516 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c4cb7be_9f38_4cbf_b05c_d531f336818c.slice/crio-ddcf82a12a7c82ff8efd84b2722c03b6783ee7357793105cc63307b86aabe0aa WatchSource:0}: Error finding container ddcf82a12a7c82ff8efd84b2722c03b6783ee7357793105cc63307b86aabe0aa: Status 404 returned error can't find the container with id ddcf82a12a7c82ff8efd84b2722c03b6783ee7357793105cc63307b86aabe0aa Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.399979 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-56d499f456-hts46"] Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.414431 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.417878 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-internal-svc" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.418717 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-octavia-public-svc" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.460529 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-56d499f456-hts46"] Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.500297 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" event={"ID":"1c4cb7be-9f38-4cbf-b05c-d531f336818c","Type":"ContainerStarted","Data":"ddcf82a12a7c82ff8efd84b2722c03b6783ee7357793105cc63307b86aabe0aa"} Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548523 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-config-data\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548580 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-scripts\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548600 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-public-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548637 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-ovndb-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548653 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-octavia-run\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548740 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-internal-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-config-data-merged\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.548826 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-combined-ca-bundle\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.650429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-config-data\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.650601 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-scripts\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.650634 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-public-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.651644 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-ovndb-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: 
I1124 20:56:51.651673 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-octavia-run\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.651733 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-internal-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.651754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-config-data-merged\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.651777 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-combined-ca-bundle\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.652279 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-octavia-run\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.654671 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-config-data-merged\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.656483 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-scripts\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.657192 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-public-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.657698 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-combined-ca-bundle\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.658708 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-internal-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.659848 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-config-data\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.673278 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3527688d-f0b4-4a46-aadd-92bc13dc3f0e-ovndb-tls-certs\") pod \"octavia-api-56d499f456-hts46\" (UID: \"3527688d-f0b4-4a46-aadd-92bc13dc3f0e\") " pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:51 crc kubenswrapper[4812]: I1124 20:56:51.775496 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:52 crc kubenswrapper[4812]: I1124 20:56:52.515467 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-5mflp" event={"ID":"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18","Type":"ContainerStarted","Data":"14e619dcb17b1beb4321b6d0acea9df9d5a8acbeb9f26948722b39d5562f6c61"} Nov 24 20:56:52 crc kubenswrapper[4812]: I1124 20:56:52.996294 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-56d499f456-hts46"] Nov 24 20:56:52 crc kubenswrapper[4812]: W1124 20:56:52.999698 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3527688d_f0b4_4a46_aadd_92bc13dc3f0e.slice/crio-02a56939aefd9705efe37631dfcb8aff5ece36333be82652d809bedffc6c9ee9 WatchSource:0}: Error finding container 02a56939aefd9705efe37631dfcb8aff5ece36333be82652d809bedffc6c9ee9: Status 404 returned error can't find the container with id 02a56939aefd9705efe37631dfcb8aff5ece36333be82652d809bedffc6c9ee9 Nov 24 20:56:53 crc kubenswrapper[4812]: E1124 20:56:53.496757 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice/crio-2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3527688d_f0b4_4a46_aadd_92bc13dc3f0e.slice/crio-4f3da799ce24ec2843f2c4e68af0e5ddf1215e1af973303009850b83a763c0fc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3527688d_f0b4_4a46_aadd_92bc13dc3f0e.slice/crio-conmon-4f3da799ce24ec2843f2c4e68af0e5ddf1215e1af973303009850b83a763c0fc.scope\": RecentStats: unable to find data in memory cache]" Nov 24 20:56:53 crc kubenswrapper[4812]: I1124 20:56:53.526154 4812 generic.go:334] "Generic (PLEG): container finished" podID="3527688d-f0b4-4a46-aadd-92bc13dc3f0e" containerID="4f3da799ce24ec2843f2c4e68af0e5ddf1215e1af973303009850b83a763c0fc" exitCode=0 Nov 24 20:56:53 crc 
kubenswrapper[4812]: I1124 20:56:53.526375 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-56d499f456-hts46" event={"ID":"3527688d-f0b4-4a46-aadd-92bc13dc3f0e","Type":"ContainerDied","Data":"4f3da799ce24ec2843f2c4e68af0e5ddf1215e1af973303009850b83a763c0fc"} Nov 24 20:56:53 crc kubenswrapper[4812]: I1124 20:56:53.526557 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-56d499f456-hts46" event={"ID":"3527688d-f0b4-4a46-aadd-92bc13dc3f0e","Type":"ContainerStarted","Data":"02a56939aefd9705efe37631dfcb8aff5ece36333be82652d809bedffc6c9ee9"} Nov 24 20:56:54 crc kubenswrapper[4812]: I1124 20:56:54.537810 4812 generic.go:334] "Generic (PLEG): container finished" podID="1d5ceee2-ae1a-468e-9c5f-ed0160b4db18" containerID="14e619dcb17b1beb4321b6d0acea9df9d5a8acbeb9f26948722b39d5562f6c61" exitCode=0 Nov 24 20:56:54 crc kubenswrapper[4812]: I1124 20:56:54.538009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-5mflp" event={"ID":"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18","Type":"ContainerDied","Data":"14e619dcb17b1beb4321b6d0acea9df9d5a8acbeb9f26948722b39d5562f6c61"} Nov 24 20:56:54 crc kubenswrapper[4812]: I1124 20:56:54.540865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-56d499f456-hts46" event={"ID":"3527688d-f0b4-4a46-aadd-92bc13dc3f0e","Type":"ContainerStarted","Data":"5e528717600f8a23530ba8c722c26a4df80e6c6e7f11f52cd5c579d893b8b04d"} Nov 24 20:56:54 crc kubenswrapper[4812]: I1124 20:56:54.645957 4812 scope.go:117] "RemoveContainer" containerID="417df04c1243c6e3c908c9f8f3bab6db48031f739393c4e59a0f4a4ed814df74" Nov 24 20:56:54 crc kubenswrapper[4812]: I1124 20:56:54.852850 4812 scope.go:117] "RemoveContainer" containerID="b7bf7d9941ec8bdd867374a0ad2cf73d0c1929935331809190c4e54a3056cb05" Nov 24 20:56:54 crc kubenswrapper[4812]: I1124 20:56:54.880046 4812 scope.go:117] "RemoveContainer" containerID="b3716cd45b7c1fb7cbc2ef23c00686aba0549080fe89d82dcb88541da2068703" Nov 24 20:56:55 crc kubenswrapper[4812]: I1124 20:56:55.550951 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-56d499f456-hts46" event={"ID":"3527688d-f0b4-4a46-aadd-92bc13dc3f0e","Type":"ContainerStarted","Data":"1ffd7c123ed24278974cfb2aaecf2ff925fb0e906b285121aaf40ee02ac91003"} Nov 24 20:56:55 crc kubenswrapper[4812]: I1124 20:56:55.551205 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:55 crc kubenswrapper[4812]: I1124 20:56:55.551219 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.030593 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-56d499f456-hts46" podStartSLOduration=5.030571833 podStartE2EDuration="5.030571833s" podCreationTimestamp="2025-11-24 20:56:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:56:55.575965362 +0000 UTC m=+6009.364917743" watchObservedRunningTime="2025-11-24 20:56:56.030571833 +0000 UTC m=+6009.819524204" Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.040688 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-78hhg"] Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.049048 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/keystone-bootstrap-78hhg"] Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.070245 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.086637 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.561246 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-5mflp" event={"ID":"1d5ceee2-ae1a-468e-9c5f-ed0160b4db18","Type":"ContainerStarted","Data":"221d345c8ad951c6c5ee88748e637250d62e579754d3b6190add1c0faeb73cd1"} Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.561877 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.583932 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-5mflp" podStartSLOduration=2.487386951 podStartE2EDuration="8.583916298s" podCreationTimestamp="2025-11-24 20:56:48 +0000 UTC" firstStartedPulling="2025-11-24 20:56:49.907687721 +0000 UTC m=+6003.696640092" lastFinishedPulling="2025-11-24 20:56:56.004217068 +0000 UTC m=+6009.793169439" observedRunningTime="2025-11-24 20:56:56.582638942 +0000 UTC m=+6010.371591313" watchObservedRunningTime="2025-11-24 20:56:56.583916298 +0000 UTC m=+6010.372868669" Nov 24 20:56:56 crc kubenswrapper[4812]: I1124 20:56:56.979664 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a84172ce-260b-4d2e-ae44-f247a870acdf" path="/var/lib/kubelet/pods/a84172ce-260b-4d2e-ae44-f247a870acdf/volumes" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.358365 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-rhkzt"] Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.360112 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.371968 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.391372 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-rhkzt"] Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.491425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data-merged\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.491490 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-combined-ca-bundle\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.491534 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-scripts\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.491553 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.592962 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-scripts\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.593003 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.593102 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data-merged\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.593146 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-combined-ca-bundle\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.594069 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data-merged\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.601830 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.611207 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-scripts\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.615679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-combined-ca-bundle\") pod \"octavia-db-sync-rhkzt\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:56:57 crc kubenswrapper[4812]: I1124 20:56:57.688808 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.433386 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-rhkzt"] Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.694949 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rhkzt" event={"ID":"0f2daa50-f7db-469b-bfc0-06d96f21dcef","Type":"ContainerStarted","Data":"6f4af1090dcf12d32a97e6fb73946f7f54cae9957aef393c2b90a5eefa03fccb"} Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.695267 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rhkzt" event={"ID":"0f2daa50-f7db-469b-bfc0-06d96f21dcef","Type":"ContainerStarted","Data":"57092e3cd9fe3ca295106e20823b2b4d676f72cae4c13bef4343641deb07dad5"} Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.698652 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" event={"ID":"1c4cb7be-9f38-4cbf-b05c-d531f336818c","Type":"ContainerStarted","Data":"d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b"} Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.998490 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.998554 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.998601 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.999701 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 20:57:02 crc kubenswrapper[4812]: I1124 20:57:02.999778 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" gracePeriod=600 Nov 24 20:57:03 crc kubenswrapper[4812]: E1124 20:57:03.133381 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.714457 4812 generic.go:334] "Generic (PLEG): container finished" podID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" containerID="6f4af1090dcf12d32a97e6fb73946f7f54cae9957aef393c2b90a5eefa03fccb" exitCode=0 Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.714559 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rhkzt" event={"ID":"0f2daa50-f7db-469b-bfc0-06d96f21dcef","Type":"ContainerDied","Data":"6f4af1090dcf12d32a97e6fb73946f7f54cae9957aef393c2b90a5eefa03fccb"} Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.720106 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerID="d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b" exitCode=0 Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.720236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" event={"ID":"1c4cb7be-9f38-4cbf-b05c-d531f336818c","Type":"ContainerDied","Data":"d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b"} Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.735379 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" exitCode=0 Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.735432 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196"} Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.735949 4812 scope.go:117] "RemoveContainer" containerID="1ed217e6b672176474a4c358d1230ff543c81838c55c07bc096791453845da5d" Nov 24 20:57:03 crc kubenswrapper[4812]: I1124 20:57:03.737164 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:57:03 crc kubenswrapper[4812]: E1124 20:57:03.741890 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:57:03 crc kubenswrapper[4812]: E1124 20:57:03.922881 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice/crio-2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice\": RecentStats: unable to find data in memory cache]" Nov 24 20:57:04 crc kubenswrapper[4812]: I1124 20:57:04.384876 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-5mflp" Nov 24 20:57:04 crc kubenswrapper[4812]: I1124 20:57:04.748300 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rhkzt" event={"ID":"0f2daa50-f7db-469b-bfc0-06d96f21dcef","Type":"ContainerStarted","Data":"59e05967d6bc599acc21cdce59b2715d00ea220c8dd65649d00999b9a8baa908"} Nov 24 20:57:04 crc kubenswrapper[4812]: I1124 20:57:04.750306 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" event={"ID":"1c4cb7be-9f38-4cbf-b05c-d531f336818c","Type":"ContainerStarted","Data":"a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4"} Nov 24 20:57:04 crc kubenswrapper[4812]: I1124 20:57:04.765293 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-rhkzt" podStartSLOduration=7.765272983 podStartE2EDuration="7.765272983s" podCreationTimestamp="2025-11-24 20:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:57:04.763489082 +0000 UTC m=+6018.552441483" watchObservedRunningTime="2025-11-24 20:57:04.765272983 +0000 UTC m=+6018.554225364" Nov 24 20:57:04 crc kubenswrapper[4812]: I1124 20:57:04.782329 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" podStartSLOduration=4.253027113 podStartE2EDuration="15.782310435s" podCreationTimestamp="2025-11-24 20:56:49 +0000 UTC" firstStartedPulling="2025-11-24 20:56:50.515086855 +0000 UTC m=+6004.304039226" lastFinishedPulling="2025-11-24 20:57:02.044370137 +0000 UTC m=+6015.833322548" observedRunningTime="2025-11-24 20:57:04.78074437 +0000 UTC m=+6018.569696761" watchObservedRunningTime="2025-11-24 20:57:04.782310435 +0000 UTC m=+6018.571262806" Nov 24 20:57:06 crc kubenswrapper[4812]: I1124 20:57:06.782684 4812 generic.go:334] "Generic (PLEG): container finished" podID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" containerID="59e05967d6bc599acc21cdce59b2715d00ea220c8dd65649d00999b9a8baa908" exitCode=0 Nov 24 20:57:06 crc kubenswrapper[4812]: I1124 20:57:06.782888 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rhkzt" 
event={"ID":"0f2daa50-f7db-469b-bfc0-06d96f21dcef","Type":"ContainerDied","Data":"59e05967d6bc599acc21cdce59b2715d00ea220c8dd65649d00999b9a8baa908"} Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.255605 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.354789 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-combined-ca-bundle\") pod \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.354989 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data-merged\") pod \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.355867 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-scripts\") pod \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.356009 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data\") pod \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\" (UID: \"0f2daa50-f7db-469b-bfc0-06d96f21dcef\") " Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.366630 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data" (OuterVolumeSpecName: "config-data") pod "0f2daa50-f7db-469b-bfc0-06d96f21dcef" (UID: "0f2daa50-f7db-469b-bfc0-06d96f21dcef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.367329 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-scripts" (OuterVolumeSpecName: "scripts") pod "0f2daa50-f7db-469b-bfc0-06d96f21dcef" (UID: "0f2daa50-f7db-469b-bfc0-06d96f21dcef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.385407 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f2daa50-f7db-469b-bfc0-06d96f21dcef" (UID: "0f2daa50-f7db-469b-bfc0-06d96f21dcef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.402321 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "0f2daa50-f7db-469b-bfc0-06d96f21dcef" (UID: "0f2daa50-f7db-469b-bfc0-06d96f21dcef"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.458930 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.458969 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.458980 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2daa50-f7db-469b-bfc0-06d96f21dcef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.458990 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/0f2daa50-f7db-469b-bfc0-06d96f21dcef-config-data-merged\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.806865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-rhkzt" event={"ID":"0f2daa50-f7db-469b-bfc0-06d96f21dcef","Type":"ContainerDied","Data":"57092e3cd9fe3ca295106e20823b2b4d676f72cae4c13bef4343641deb07dad5"} Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.807185 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57092e3cd9fe3ca295106e20823b2b4d676f72cae4c13bef4343641deb07dad5" Nov 24 20:57:08 crc kubenswrapper[4812]: I1124 20:57:08.807396 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-rhkzt" Nov 24 20:57:10 crc kubenswrapper[4812]: I1124 20:57:10.538812 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:57:10 crc kubenswrapper[4812]: I1124 20:57:10.844101 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-56d499f456-hts46" Nov 24 20:57:10 crc kubenswrapper[4812]: I1124 20:57:10.926012 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6dfb9bffb6-fskh7"] Nov 24 20:57:10 crc kubenswrapper[4812]: I1124 20:57:10.926270 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6dfb9bffb6-fskh7" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api" containerID="cri-o://9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b" gracePeriod=30 Nov 24 20:57:10 crc kubenswrapper[4812]: I1124 20:57:10.926329 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-api-6dfb9bffb6-fskh7" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api-provider-agent" containerID="cri-o://493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f" gracePeriod=30 Nov 24 20:57:11 crc kubenswrapper[4812]: I1124 20:57:11.856152 4812 generic.go:334] "Generic (PLEG): container finished" podID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerID="493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f" exitCode=0 Nov 24 20:57:11 crc kubenswrapper[4812]: I1124 20:57:11.856211 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6dfb9bffb6-fskh7" 
event={"ID":"d86d2f3d-5efe-452c-9af4-91c2b509bced","Type":"ContainerDied","Data":"493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f"} Nov 24 20:57:14 crc kubenswrapper[4812]: E1124 20:57:14.212264 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice/crio-2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd86d2f3d_5efe_452c_9af4_91c2b509bced.slice/crio-conmon-9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice\": RecentStats: unable to find data in memory cache]" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.549452 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.707501 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-combined-ca-bundle\") pod \"d86d2f3d-5efe-452c-9af4-91c2b509bced\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.707599 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-ovndb-tls-certs\") pod \"d86d2f3d-5efe-452c-9af4-91c2b509bced\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.707631 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-octavia-run\") pod \"d86d2f3d-5efe-452c-9af4-91c2b509bced\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.707671 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data\") pod \"d86d2f3d-5efe-452c-9af4-91c2b509bced\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.707733 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-scripts\") pod \"d86d2f3d-5efe-452c-9af4-91c2b509bced\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.707822 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data-merged\") pod \"d86d2f3d-5efe-452c-9af4-91c2b509bced\" (UID: \"d86d2f3d-5efe-452c-9af4-91c2b509bced\") " Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.707970 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-octavia-run" (OuterVolumeSpecName: "octavia-run") pod 
"d86d2f3d-5efe-452c-9af4-91c2b509bced" (UID: "d86d2f3d-5efe-452c-9af4-91c2b509bced"). InnerVolumeSpecName "octavia-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.708209 4812 reconciler_common.go:293] "Volume detached for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-octavia-run\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.713139 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data" (OuterVolumeSpecName: "config-data") pod "d86d2f3d-5efe-452c-9af4-91c2b509bced" (UID: "d86d2f3d-5efe-452c-9af4-91c2b509bced"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.713288 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-scripts" (OuterVolumeSpecName: "scripts") pod "d86d2f3d-5efe-452c-9af4-91c2b509bced" (UID: "d86d2f3d-5efe-452c-9af4-91c2b509bced"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.765147 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d86d2f3d-5efe-452c-9af4-91c2b509bced" (UID: "d86d2f3d-5efe-452c-9af4-91c2b509bced"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.779142 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "d86d2f3d-5efe-452c-9af4-91c2b509bced" (UID: "d86d2f3d-5efe-452c-9af4-91c2b509bced"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.811104 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data-merged\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.811159 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.811179 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.811196 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.859468 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d86d2f3d-5efe-452c-9af4-91c2b509bced" (UID: "d86d2f3d-5efe-452c-9af4-91c2b509bced"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.889886 4812 generic.go:334] "Generic (PLEG): container finished" podID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerID="9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b" exitCode=0 Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.889931 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6dfb9bffb6-fskh7" event={"ID":"d86d2f3d-5efe-452c-9af4-91c2b509bced","Type":"ContainerDied","Data":"9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b"} Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.889964 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-6dfb9bffb6-fskh7" event={"ID":"d86d2f3d-5efe-452c-9af4-91c2b509bced","Type":"ContainerDied","Data":"e35631aa6bdf30e6ef243598165f1718a8d11f66aa232ed94e19b01575aaf440"} Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.889984 4812 scope.go:117] "RemoveContainer" containerID="493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.890048 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-6dfb9bffb6-fskh7" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.920155 4812 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86d2f3d-5efe-452c-9af4-91c2b509bced-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.927521 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-api-6dfb9bffb6-fskh7"] Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.928694 4812 scope.go:117] "RemoveContainer" containerID="9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.935946 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-api-6dfb9bffb6-fskh7"] Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.951533 4812 scope.go:117] "RemoveContainer" containerID="07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.973210 4812 scope.go:117] "RemoveContainer" containerID="493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f" Nov 24 20:57:14 crc kubenswrapper[4812]: E1124 20:57:14.973637 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f\": container with ID starting with 493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f not found: ID does not exist" containerID="493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.973670 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f"} err="failed to get container status \"493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f\": rpc error: code = NotFound desc = could not find container \"493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f\": container with ID starting with 493126fb8b38d79d48d38a4a467e1de7f003252ca3841a0529fe884a2fa3822f not found: ID does not exist" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.973693 4812 scope.go:117] "RemoveContainer" containerID="9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b" Nov 24 20:57:14 crc kubenswrapper[4812]: E1124 20:57:14.974062 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b\": container with ID starting with 9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b not found: ID does not exist" containerID="9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.974086 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b"} err="failed to get container status \"9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b\": rpc error: code = NotFound desc = could not find container \"9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b\": container with ID starting with 9fc0a617a00324b23da90bcbedbbc17abb77e62e36c427ff995971cc3717044b not found: ID does not exist" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 
20:57:14.974102 4812 scope.go:117] "RemoveContainer" containerID="07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9" Nov 24 20:57:14 crc kubenswrapper[4812]: E1124 20:57:14.974447 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9\": container with ID starting with 07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9 not found: ID does not exist" containerID="07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.974475 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9"} err="failed to get container status \"07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9\": rpc error: code = NotFound desc = could not find container \"07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9\": container with ID starting with 07db2f6fb2056f3a44bcf2d520de8f9c92b82be7a203d7d60b094adc3ac380e9 not found: ID does not exist" Nov 24 20:57:14 crc kubenswrapper[4812]: I1124 20:57:14.983066 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" path="/var/lib/kubelet/pods/d86d2f3d-5efe-452c-9af4-91c2b509bced/volumes" Nov 24 20:57:18 crc kubenswrapper[4812]: I1124 20:57:18.966793 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:57:18 crc kubenswrapper[4812]: E1124 20:57:18.967873 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:57:24 crc kubenswrapper[4812]: E1124 20:57:24.522624 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice/crio-2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice\": RecentStats: unable to find data in memory cache]" Nov 24 20:57:31 crc kubenswrapper[4812]: I1124 20:57:31.308017 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-x7gsx"] Nov 24 20:57:31 crc kubenswrapper[4812]: I1124 20:57:31.308806 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" podUID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerName="octavia-amphora-httpd" containerID="cri-o://a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4" gracePeriod=30 Nov 24 20:57:31 crc kubenswrapper[4812]: I1124 20:57:31.966130 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:57:31 crc kubenswrapper[4812]: E1124 20:57:31.966607 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.013357 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.098241 4812 generic.go:334] "Generic (PLEG): container finished" podID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerID="a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4" exitCode=0 Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.098277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" event={"ID":"1c4cb7be-9f38-4cbf-b05c-d531f336818c","Type":"ContainerDied","Data":"a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4"} Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.098301 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" event={"ID":"1c4cb7be-9f38-4cbf-b05c-d531f336818c","Type":"ContainerDied","Data":"ddcf82a12a7c82ff8efd84b2722c03b6783ee7357793105cc63307b86aabe0aa"} Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.098317 4812 scope.go:117] "RemoveContainer" containerID="a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.098437 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-x7gsx" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.121061 4812 scope.go:117] "RemoveContainer" containerID="d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.142794 4812 scope.go:117] "RemoveContainer" containerID="a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4" Nov 24 20:57:32 crc kubenswrapper[4812]: E1124 20:57:32.143239 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4\": container with ID starting with a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4 not found: ID does not exist" containerID="a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.143270 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4"} err="failed to get container status \"a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4\": rpc error: code = NotFound desc = could not find container \"a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4\": container with ID starting with a10f4cbb19e592d015a3b2ca22d0757c4eb161e5a12b67cd52ee5987df8ff7b4 not found: ID does not exist" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.143298 4812 scope.go:117] "RemoveContainer" containerID="d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b" Nov 24 20:57:32 crc kubenswrapper[4812]: E1124 20:57:32.143591 4812 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b\": container with ID starting with d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b not found: ID does not exist" containerID="d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.143612 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b"} err="failed to get container status \"d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b\": rpc error: code = NotFound desc = could not find container \"d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b\": container with ID starting with d89521a91e57f3f5e7d480e360765c7459b892648b06df952caa15dab4af2a4b not found: ID does not exist" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.150516 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1c4cb7be-9f38-4cbf-b05c-d531f336818c-amphora-image\") pod \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.151361 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c4cb7be-9f38-4cbf-b05c-d531f336818c-httpd-config\") pod \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\" (UID: \"1c4cb7be-9f38-4cbf-b05c-d531f336818c\") " Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.211889 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c4cb7be-9f38-4cbf-b05c-d531f336818c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1c4cb7be-9f38-4cbf-b05c-d531f336818c" (UID: "1c4cb7be-9f38-4cbf-b05c-d531f336818c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.255810 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1c4cb7be-9f38-4cbf-b05c-d531f336818c-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.264710 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c4cb7be-9f38-4cbf-b05c-d531f336818c-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "1c4cb7be-9f38-4cbf-b05c-d531f336818c" (UID: "1c4cb7be-9f38-4cbf-b05c-d531f336818c"). InnerVolumeSpecName "amphora-image". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.358114 4812 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1c4cb7be-9f38-4cbf-b05c-d531f336818c-amphora-image\") on node \"crc\" DevicePath \"\"" Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.450318 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-x7gsx"] Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.461643 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-x7gsx"] Nov 24 20:57:32 crc kubenswrapper[4812]: I1124 20:57:32.979185 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" path="/var/lib/kubelet/pods/1c4cb7be-9f38-4cbf-b05c-d531f336818c/volumes" Nov 24 20:57:34 crc kubenswrapper[4812]: E1124 20:57:34.775495 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice/crio-2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2\": RecentStats: unable to find data in memory cache]" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.248312 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-5955f5554b-657vt"] Nov 24 20:57:37 crc kubenswrapper[4812]: E1124 20:57:37.249479 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api-provider-agent" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.249506 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api-provider-agent" Nov 24 20:57:37 crc kubenswrapper[4812]: E1124 20:57:37.249549 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerName="init" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.249562 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerName="init" Nov 24 20:57:37 crc kubenswrapper[4812]: E1124 20:57:37.249589 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="init" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.249601 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="init" Nov 24 20:57:37 crc kubenswrapper[4812]: E1124 20:57:37.249618 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" containerName="init" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.249629 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" containerName="init" Nov 24 20:57:37 crc kubenswrapper[4812]: E1124 20:57:37.249649 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.249661 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api" Nov 24 20:57:37 crc kubenswrapper[4812]: E1124 20:57:37.249687 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" containerName="octavia-db-sync" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.249699 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" containerName="octavia-db-sync" Nov 24 20:57:37 crc kubenswrapper[4812]: E1124 20:57:37.249735 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerName="octavia-amphora-httpd" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.249747 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerName="octavia-amphora-httpd" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.250108 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4cb7be-9f38-4cbf-b05c-d531f336818c" containerName="octavia-amphora-httpd" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.250134 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.250163 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" containerName="octavia-db-sync" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.250191 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d86d2f3d-5efe-452c-9af4-91c2b509bced" containerName="octavia-api-provider-agent" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.252218 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.260489 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.264159 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1228f7a5-f547-4e4c-a49d-597bcbe2860c-amphora-image\") pod \"octavia-image-upload-5955f5554b-657vt\" (UID: \"1228f7a5-f547-4e4c-a49d-597bcbe2860c\") " pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.264576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1228f7a5-f547-4e4c-a49d-597bcbe2860c-httpd-config\") pod \"octavia-image-upload-5955f5554b-657vt\" (UID: \"1228f7a5-f547-4e4c-a49d-597bcbe2860c\") " pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.265563 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-657vt"] Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.366944 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1228f7a5-f547-4e4c-a49d-597bcbe2860c-httpd-config\") pod \"octavia-image-upload-5955f5554b-657vt\" (UID: \"1228f7a5-f547-4e4c-a49d-597bcbe2860c\") " pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.367028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1228f7a5-f547-4e4c-a49d-597bcbe2860c-amphora-image\") pod \"octavia-image-upload-5955f5554b-657vt\" (UID: \"1228f7a5-f547-4e4c-a49d-597bcbe2860c\") " pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.369026 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1228f7a5-f547-4e4c-a49d-597bcbe2860c-amphora-image\") pod \"octavia-image-upload-5955f5554b-657vt\" (UID: \"1228f7a5-f547-4e4c-a49d-597bcbe2860c\") " pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.373180 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1228f7a5-f547-4e4c-a49d-597bcbe2860c-httpd-config\") pod \"octavia-image-upload-5955f5554b-657vt\" (UID: \"1228f7a5-f547-4e4c-a49d-597bcbe2860c\") " pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:37 crc kubenswrapper[4812]: I1124 20:57:37.595405 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-5955f5554b-657vt" Nov 24 20:57:38 crc kubenswrapper[4812]: I1124 20:57:38.123302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-5955f5554b-657vt"] Nov 24 20:57:38 crc kubenswrapper[4812]: I1124 20:57:38.190748 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-657vt" event={"ID":"1228f7a5-f547-4e4c-a49d-597bcbe2860c","Type":"ContainerStarted","Data":"e176882b7f4f8b9ad63824fd1b13c73ecee0c23c062ab8079161b0c0cac864c9"} Nov 24 20:57:39 crc kubenswrapper[4812]: I1124 20:57:39.210658 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-657vt" event={"ID":"1228f7a5-f547-4e4c-a49d-597bcbe2860c","Type":"ContainerStarted","Data":"c0c8a937b387ef998fcb3749f8704d37104234a08dbc8964f11736b6a4ccd808"} Nov 24 20:57:40 crc kubenswrapper[4812]: I1124 20:57:40.222898 4812 generic.go:334] "Generic (PLEG): container finished" podID="1228f7a5-f547-4e4c-a49d-597bcbe2860c" containerID="c0c8a937b387ef998fcb3749f8704d37104234a08dbc8964f11736b6a4ccd808" exitCode=0 Nov 24 20:57:40 crc kubenswrapper[4812]: I1124 20:57:40.222984 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-657vt" event={"ID":"1228f7a5-f547-4e4c-a49d-597bcbe2860c","Type":"ContainerDied","Data":"c0c8a937b387ef998fcb3749f8704d37104234a08dbc8964f11736b6a4ccd808"} Nov 24 20:57:41 crc kubenswrapper[4812]: I1124 20:57:41.233209 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-5955f5554b-657vt" event={"ID":"1228f7a5-f547-4e4c-a49d-597bcbe2860c","Type":"ContainerStarted","Data":"c407b17fb98ae90b5bc7d72eba6ab5a245ed0f70430c055bfeea4e188f63c624"} Nov 24 20:57:41 crc kubenswrapper[4812]: I1124 20:57:41.255014 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-5955f5554b-657vt" podStartSLOduration=3.790934333 podStartE2EDuration="4.254993971s" podCreationTimestamp="2025-11-24 20:57:37 +0000 UTC" firstStartedPulling="2025-11-24 20:57:38.12704803 +0000 UTC m=+6051.916000411" lastFinishedPulling="2025-11-24 20:57:38.591107678 +0000 UTC m=+6052.380060049" observedRunningTime="2025-11-24 20:57:41.24825189 +0000 UTC m=+6055.037204281" watchObservedRunningTime="2025-11-24 20:57:41.254993971 +0000 UTC m=+6055.043946362" Nov 24 20:57:45 crc kubenswrapper[4812]: E1124 20:57:45.014888 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice/crio-2787ead8f81f6ff0155d35357cf3dc8dfa075e73005b65b6bfced09ed48a27e2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podccf7ea57_8fd0_4515_be74_9f7e940d0380.slice\": RecentStats: unable to find data in memory cache]" Nov 24 20:57:45 crc kubenswrapper[4812]: I1124 20:57:45.966864 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:57:45 crc kubenswrapper[4812]: E1124 20:57:45.968166 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:57:54 crc kubenswrapper[4812]: I1124 20:57:54.994302 4812 scope.go:117] "RemoveContainer" containerID="fe80deb1c02551173cd990b9ad31f041cf3f3c706a986f035fde6869e9b4d5f4" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.617138 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-9z7d6"] Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.619083 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.623442 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.623448 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.623857 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.627978 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-9z7d6"] Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.758611 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-config-data\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.758708 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-amphora-certs\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.758819 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-combined-ca-bundle\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.758969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-hm-ports\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.759082 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-config-data-merged\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.759235 4812 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-scripts\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.860946 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-config-data\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.861022 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-amphora-certs\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.861046 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-combined-ca-bundle\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.861076 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-hm-ports\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.861108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-config-data-merged\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.861154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-scripts\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.864129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-hm-ports\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.864410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-config-data-merged\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.866855 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: 
\"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-amphora-certs\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.867129 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-config-data\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.868704 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-scripts\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.870501 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bb542bb-fa97-46fa-9c1b-7809b8ae59e2-combined-ca-bundle\") pod \"octavia-healthmanager-9z7d6\" (UID: \"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2\") " pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:55 crc kubenswrapper[4812]: I1124 20:57:55.945745 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:57:56 crc kubenswrapper[4812]: I1124 20:57:56.629105 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-9z7d6"] Nov 24 20:57:56 crc kubenswrapper[4812]: W1124 20:57:56.633228 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bb542bb_fa97_46fa_9c1b_7809b8ae59e2.slice/crio-60fcd138fef20530a71de7c721d5fec0f13f9753f0da99068f70df4aecdad869 WatchSource:0}: Error finding container 60fcd138fef20530a71de7c721d5fec0f13f9753f0da99068f70df4aecdad869: Status 404 returned error can't find the container with id 60fcd138fef20530a71de7c721d5fec0f13f9753f0da99068f70df4aecdad869 Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.360210 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-b7ndg"] Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.363709 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.367513 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.368932 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.372520 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-b7ndg"] Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.470455 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-9z7d6" event={"ID":"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2","Type":"ContainerStarted","Data":"3260450eb1343eafbb2b11cb27bed67ee40720a1d69b1ef1950c51145e64dec9"} Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.470801 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-9z7d6" event={"ID":"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2","Type":"ContainerStarted","Data":"60fcd138fef20530a71de7c721d5fec0f13f9753f0da99068f70df4aecdad869"} Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.497850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/735f042b-f017-4e06-8699-518a9853d124-config-data-merged\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.497940 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/735f042b-f017-4e06-8699-518a9853d124-hm-ports\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.497986 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-config-data\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.498015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-amphora-certs\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.498039 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-combined-ca-bundle\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.498071 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-scripts\") pod \"octavia-housekeeping-b7ndg\" (UID: 
\"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.600254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/735f042b-f017-4e06-8699-518a9853d124-hm-ports\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.600314 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-config-data\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.600407 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-amphora-certs\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.600478 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-combined-ca-bundle\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.600553 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-scripts\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.601044 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/735f042b-f017-4e06-8699-518a9853d124-config-data-merged\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.601200 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/735f042b-f017-4e06-8699-518a9853d124-hm-ports\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.601437 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/735f042b-f017-4e06-8699-518a9853d124-config-data-merged\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.620710 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-combined-ca-bundle\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 
20:57:57.621297 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-scripts\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.625881 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-amphora-certs\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.626010 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735f042b-f017-4e06-8699-518a9853d124-config-data\") pod \"octavia-housekeeping-b7ndg\" (UID: \"735f042b-f017-4e06-8699-518a9853d124\") " pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:57 crc kubenswrapper[4812]: I1124 20:57:57.693227 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.276413 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-6cwpp"] Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.278971 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.282374 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.283201 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.288867 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-6cwpp"] Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.360319 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-b7ndg"] Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.427007 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-combined-ca-bundle\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.427286 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/41e05256-a3b2-4fed-8f9a-970cdbf2d392-config-data-merged\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.427471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-config-data\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.427557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-scripts\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.427658 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-amphora-certs\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.427776 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/41e05256-a3b2-4fed-8f9a-970cdbf2d392-hm-ports\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.482962 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-b7ndg" event={"ID":"735f042b-f017-4e06-8699-518a9853d124","Type":"ContainerStarted","Data":"24d8d6b53bb3795d5b294176acaa3dcaad4f0a7bc8a702b800858205fca39bc0"} Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.529303 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-config-data\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.529583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-scripts\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.529759 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-amphora-certs\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.529946 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/41e05256-a3b2-4fed-8f9a-970cdbf2d392-hm-ports\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.530115 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-combined-ca-bundle\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.530238 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/41e05256-a3b2-4fed-8f9a-970cdbf2d392-config-data-merged\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: 
I1124 20:57:58.530831 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/41e05256-a3b2-4fed-8f9a-970cdbf2d392-config-data-merged\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.532559 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/41e05256-a3b2-4fed-8f9a-970cdbf2d392-hm-ports\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.537856 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-config-data\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.538850 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-scripts\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.539076 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-amphora-certs\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.548365 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e05256-a3b2-4fed-8f9a-970cdbf2d392-combined-ca-bundle\") pod \"octavia-worker-6cwpp\" (UID: \"41e05256-a3b2-4fed-8f9a-970cdbf2d392\") " pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:58 crc kubenswrapper[4812]: I1124 20:57:58.611631 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-worker-6cwpp" Nov 24 20:57:59 crc kubenswrapper[4812]: I1124 20:57:59.239396 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-6cwpp"] Nov 24 20:57:59 crc kubenswrapper[4812]: I1124 20:57:59.495664 4812 generic.go:334] "Generic (PLEG): container finished" podID="1bb542bb-fa97-46fa-9c1b-7809b8ae59e2" containerID="3260450eb1343eafbb2b11cb27bed67ee40720a1d69b1ef1950c51145e64dec9" exitCode=0 Nov 24 20:57:59 crc kubenswrapper[4812]: I1124 20:57:59.495764 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-9z7d6" event={"ID":"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2","Type":"ContainerDied","Data":"3260450eb1343eafbb2b11cb27bed67ee40720a1d69b1ef1950c51145e64dec9"} Nov 24 20:57:59 crc kubenswrapper[4812]: I1124 20:57:59.497677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-6cwpp" event={"ID":"41e05256-a3b2-4fed-8f9a-970cdbf2d392","Type":"ContainerStarted","Data":"b84888beb58213c53096dfe466a0bb2ba3ebe28f956ca48a7631e0c250a70a14"} Nov 24 20:57:59 crc kubenswrapper[4812]: E1124 20:57:59.933481 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/1e/1e7d5099c0f88ba2188dea06d2913fc74b8114aaa3379316a63473645184b20e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251124%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251124T205758Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=7fc7ed7195c7bb87f8888f9819baf83c0453bec224c08820e35d6bc4de1aad5d&region=us-east-1&namespace=podified-antelope-centos9&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-octavia-housekeeping&akamai_signature=exp=1764018778~hmac=dfd7cbe4cd47ac9a2f69494c3d422cf0eec12e0f561fa9dcb7880b2c4221c2cc\": remote error: tls: internal error" image="quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:6154d7cebd7c339afa5b86330262156171743aa5b79c2b78f9a2f378005ed8fb" Nov 24 20:57:59 crc kubenswrapper[4812]: E1124 20:57:59.933980 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:6154d7cebd7c339afa5b86330262156171743aa5b79c2b78f9a2f378005ed8fb,Command:[/bin/bash],Args:[-c /usr/local/bin/container-scripts/init.sh
octavia-housekeeping],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fbh9fhchcbh5d6h5cbh8h68fh78hd7hfbh56fh657h5f4hc9h56ch58bh57dh55dh556h64fh5cbh75h555h65dh5fbhbch56hd5h56h5d7h546q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:MGMT_CIDR,Value:172.24.0.0/16,ValueFrom:nil,},EnvVar{Name:MGMT_GATEWAY,Value:172.23.0.150,ValueFrom:nil,},EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hm-ports,ReadOnly:true,MountPath:/var/lib/hmports,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-housekeeping-b7ndg_openstack(735f042b-f017-4e06-8699-518a9853d124): ErrImagePull: parsing image configuration: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/1e/1e7d5099c0f88ba2188dea06d2913fc74b8114aaa3379316a63473645184b20e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251124%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251124T205758Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=7fc7ed7195c7bb87f8888f9819baf83c0453bec224c08820e35d6bc4de1aad5d&region=us-east-1&namespace=podified-antelope-centos9&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-octavia-housekeeping&akamai_signature=exp=1764018778~hmac=dfd7cbe4cd47ac9a2f69494c3d422cf0eec12e0f561fa9dcb7880b2c4221c2cc\": remote error: tls: internal error" logger="UnhandledError" Nov 24 20:57:59 crc kubenswrapper[4812]: E1124 20:57:59.939124 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"parsing image configuration: Get
\\\"https://cdn01.quay.io/quayio-production-s3/sha256/1e/1e7d5099c0f88ba2188dea06d2913fc74b8114aaa3379316a63473645184b20e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251124%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251124T205758Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=7fc7ed7195c7bb87f8888f9819baf83c0453bec224c08820e35d6bc4de1aad5d®ion=us-east-1&namespace=podified-antelope-centos9&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-octavia-housekeeping&akamai_signature=exp=1764018778~hmac=dfd7cbe4cd47ac9a2f69494c3d422cf0eec12e0f561fa9dcb7880b2c4221c2cc\\\": remote error: tls: internal error\"" pod="openstack/octavia-housekeeping-b7ndg" podUID="735f042b-f017-4e06-8699-518a9853d124" Nov 24 20:58:00 crc kubenswrapper[4812]: I1124 20:58:00.517829 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-9z7d6" event={"ID":"1bb542bb-fa97-46fa-9c1b-7809b8ae59e2","Type":"ContainerStarted","Data":"e1d7b4e629fea401ee16b42732258ad51602396fecb03a917a5367ca39598646"} Nov 24 20:58:00 crc kubenswrapper[4812]: I1124 20:58:00.518534 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:58:00 crc kubenswrapper[4812]: E1124 20:58:00.525748 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-octavia-housekeeping@sha256:6154d7cebd7c339afa5b86330262156171743aa5b79c2b78f9a2f378005ed8fb\\\"\"" pod="openstack/octavia-housekeeping-b7ndg" podUID="735f042b-f017-4e06-8699-518a9853d124" Nov 24 20:58:00 crc kubenswrapper[4812]: I1124 20:58:00.565968 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-9z7d6" podStartSLOduration=5.56594966 podStartE2EDuration="5.56594966s" podCreationTimestamp="2025-11-24 20:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:58:00.553117477 +0000 UTC m=+6074.342069848" watchObservedRunningTime="2025-11-24 20:58:00.56594966 +0000 UTC m=+6074.354902031" Nov 24 20:58:00 crc kubenswrapper[4812]: I1124 20:58:00.966149 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:58:00 crc kubenswrapper[4812]: E1124 20:58:00.967148 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:58:01 crc kubenswrapper[4812]: I1124 20:58:01.528742 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-6cwpp" event={"ID":"41e05256-a3b2-4fed-8f9a-970cdbf2d392","Type":"ContainerStarted","Data":"d9df6bc0eb78cd2047ef88044c7a217dc310ba8001e8fe78833a0b902ea51541"} Nov 24 20:58:02 crc kubenswrapper[4812]: I1124 20:58:02.558279 4812 generic.go:334] "Generic (PLEG): container finished" podID="41e05256-a3b2-4fed-8f9a-970cdbf2d392" containerID="d9df6bc0eb78cd2047ef88044c7a217dc310ba8001e8fe78833a0b902ea51541" exitCode=0 Nov 24 20:58:02 crc 
kubenswrapper[4812]: I1124 20:58:02.558386 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-6cwpp" event={"ID":"41e05256-a3b2-4fed-8f9a-970cdbf2d392","Type":"ContainerDied","Data":"d9df6bc0eb78cd2047ef88044c7a217dc310ba8001e8fe78833a0b902ea51541"} Nov 24 20:58:03 crc kubenswrapper[4812]: I1124 20:58:03.575853 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-6cwpp" event={"ID":"41e05256-a3b2-4fed-8f9a-970cdbf2d392","Type":"ContainerStarted","Data":"feb80e3b858eea458b5ab432e2de8209b8bc1cf3a37b297d4db44b6bc7bd66f6"} Nov 24 20:58:03 crc kubenswrapper[4812]: I1124 20:58:03.579073 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-6cwpp" Nov 24 20:58:03 crc kubenswrapper[4812]: I1124 20:58:03.608197 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-6cwpp" podStartSLOduration=4.364155332 podStartE2EDuration="5.608180796s" podCreationTimestamp="2025-11-24 20:57:58 +0000 UTC" firstStartedPulling="2025-11-24 20:57:59.243220859 +0000 UTC m=+6073.032173250" lastFinishedPulling="2025-11-24 20:58:00.487246343 +0000 UTC m=+6074.276198714" observedRunningTime="2025-11-24 20:58:03.600688124 +0000 UTC m=+6077.389640495" watchObservedRunningTime="2025-11-24 20:58:03.608180796 +0000 UTC m=+6077.397133167" Nov 24 20:58:10 crc kubenswrapper[4812]: I1124 20:58:10.991174 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-9z7d6" Nov 24 20:58:13 crc kubenswrapper[4812]: I1124 20:58:13.659385 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-6cwpp" Nov 24 20:58:13 crc kubenswrapper[4812]: I1124 20:58:13.698297 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-b7ndg" event={"ID":"735f042b-f017-4e06-8699-518a9853d124","Type":"ContainerStarted","Data":"686420903c86c50a5828305af6fd80baee499600e1553f3118b3f731e4109b68"} Nov 24 20:58:13 crc kubenswrapper[4812]: I1124 20:58:13.966263 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:58:13 crc kubenswrapper[4812]: E1124 20:58:13.966733 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:58:14 crc kubenswrapper[4812]: I1124 20:58:14.710673 4812 generic.go:334] "Generic (PLEG): container finished" podID="735f042b-f017-4e06-8699-518a9853d124" containerID="686420903c86c50a5828305af6fd80baee499600e1553f3118b3f731e4109b68" exitCode=0 Nov 24 20:58:14 crc kubenswrapper[4812]: I1124 20:58:14.710715 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-b7ndg" event={"ID":"735f042b-f017-4e06-8699-518a9853d124","Type":"ContainerDied","Data":"686420903c86c50a5828305af6fd80baee499600e1553f3118b3f731e4109b68"} Nov 24 20:58:15 crc kubenswrapper[4812]: I1124 20:58:15.740622 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-b7ndg" 
event={"ID":"735f042b-f017-4e06-8699-518a9853d124","Type":"ContainerStarted","Data":"64d18448dbb0af2eaf1c9195a0cd3bff5d1fc5915cbbbe2087c8667d9fa68c3c"} Nov 24 20:58:15 crc kubenswrapper[4812]: I1124 20:58:15.742480 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:58:26 crc kubenswrapper[4812]: I1124 20:58:26.976479 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:58:26 crc kubenswrapper[4812]: E1124 20:58:26.977419 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:58:27 crc kubenswrapper[4812]: I1124 20:58:27.740427 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-housekeeping-b7ndg" Nov 24 20:58:27 crc kubenswrapper[4812]: I1124 20:58:27.777681 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-b7ndg" podStartSLOduration=15.969505543 podStartE2EDuration="30.777655757s" podCreationTimestamp="2025-11-24 20:57:57 +0000 UTC" firstStartedPulling="2025-11-24 20:57:58.352805478 +0000 UTC m=+6072.141757859" lastFinishedPulling="2025-11-24 20:58:13.160955702 +0000 UTC m=+6086.949908073" observedRunningTime="2025-11-24 20:58:15.776027514 +0000 UTC m=+6089.564979965" watchObservedRunningTime="2025-11-24 20:58:27.777655757 +0000 UTC m=+6101.566608138" Nov 24 20:58:41 crc kubenswrapper[4812]: I1124 20:58:41.966012 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:58:41 crc kubenswrapper[4812]: E1124 20:58:41.967091 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:58:56 crc kubenswrapper[4812]: I1124 20:58:56.985947 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:58:56 crc kubenswrapper[4812]: E1124 20:58:56.987163 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:59:09 crc kubenswrapper[4812]: I1124 20:59:09.966035 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:59:09 crc kubenswrapper[4812]: E1124 20:59:09.967118 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:59:11 crc kubenswrapper[4812]: I1124 20:59:11.051708 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7473-account-create-mg6kt"] Nov 24 20:59:11 crc kubenswrapper[4812]: I1124 20:59:11.063821 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-fl42b"] Nov 24 20:59:11 crc kubenswrapper[4812]: I1124 20:59:11.074769 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-fl42b"] Nov 24 20:59:11 crc kubenswrapper[4812]: I1124 20:59:11.085659 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7473-account-create-mg6kt"] Nov 24 20:59:12 crc kubenswrapper[4812]: I1124 20:59:12.984930 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1aeaff7-a31a-4a80-9513-2046a638e838" path="/var/lib/kubelet/pods/b1aeaff7-a31a-4a80-9513-2046a638e838/volumes" Nov 24 20:59:12 crc kubenswrapper[4812]: I1124 20:59:12.986245 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcf16335-ccec-4697-ae06-28ee97e365bf" path="/var/lib/kubelet/pods/dcf16335-ccec-4697-ae06-28ee97e365bf/volumes" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.185710 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-769875dd67-7r68m"] Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.187747 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.190497 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.190708 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.191113 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.198809 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-vwglc" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.207082 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-769875dd67-7r68m"] Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.232131 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.232413 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-log" containerID="cri-o://9064a24928d30ca1c3ca4873fe3a0e3b7e19158aaf515861322062c22cf08c99" gracePeriod=30 Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.232566 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-httpd" containerID="cri-o://805b40c140e1f30d927a2501a4fcd6ddb259ee88d15535d5e69940fff038ab45" gracePeriod=30 Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.302935 4812 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.303367 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-log" containerID="cri-o://fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29" gracePeriod=30 Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.303797 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-httpd" containerID="cri-o://17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9" gracePeriod=30 Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.322832 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66c66ddcf5-555wz"] Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.324271 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386164 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-logs\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386293 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-scripts\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386394 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-config-data\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386566 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-scripts\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386606 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-config-data\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386665 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggsb\" (UniqueName: \"kubernetes.io/projected/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-kube-api-access-sggsb\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 
20:59:16.386714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-horizon-secret-key\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386820 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-horizon-secret-key\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386924 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-logs\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.386987 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh85r\" (UniqueName: \"kubernetes.io/projected/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-kube-api-access-lh85r\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.398303 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66c66ddcf5-555wz"] Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.488977 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-horizon-secret-key\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489089 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-logs\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489141 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh85r\" (UniqueName: \"kubernetes.io/projected/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-kube-api-access-lh85r\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-logs\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489229 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-scripts\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " 
pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489267 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-config-data\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-scripts\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489374 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-config-data\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489402 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sggsb\" (UniqueName: \"kubernetes.io/projected/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-kube-api-access-sggsb\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.489429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-horizon-secret-key\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.490717 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-scripts\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.491027 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-logs\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.491596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-logs\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.492075 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-scripts\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.494132 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-config-data\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.495119 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-config-data\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.495191 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-horizon-secret-key\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.501229 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-horizon-secret-key\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.507756 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh85r\" (UniqueName: \"kubernetes.io/projected/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-kube-api-access-lh85r\") pod \"horizon-66c66ddcf5-555wz\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.512941 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sggsb\" (UniqueName: \"kubernetes.io/projected/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-kube-api-access-sggsb\") pod \"horizon-769875dd67-7r68m\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.552467 4812 generic.go:334] "Generic (PLEG): container finished" podID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerID="fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29" exitCode=143 Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.552552 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5151fc6f-3b0d-460e-8c31-08092b0fca85","Type":"ContainerDied","Data":"fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29"} Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.555857 4812 generic.go:334] "Generic (PLEG): container finished" podID="38935c29-7af7-4470-8eb2-a752feb40275" containerID="9064a24928d30ca1c3ca4873fe3a0e3b7e19158aaf515861322062c22cf08c99" exitCode=143 Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.555909 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38935c29-7af7-4470-8eb2-a752feb40275","Type":"ContainerDied","Data":"9064a24928d30ca1c3ca4873fe3a0e3b7e19158aaf515861322062c22cf08c99"} Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.698735 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:16 crc kubenswrapper[4812]: I1124 20:59:16.808218 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:17 crc kubenswrapper[4812]: I1124 20:59:17.170837 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66c66ddcf5-555wz"] Nov 24 20:59:17 crc kubenswrapper[4812]: W1124 20:59:17.293470 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc23ec86d_efe2_4fd9_806f_20c7f6eee8b0.slice/crio-6c75284f31af4c748633a4eb9f6de016a4057e3d775c54e6ae8c05d3801852b0 WatchSource:0}: Error finding container 6c75284f31af4c748633a4eb9f6de016a4057e3d775c54e6ae8c05d3801852b0: Status 404 returned error can't find the container with id 6c75284f31af4c748633a4eb9f6de016a4057e3d775c54e6ae8c05d3801852b0 Nov 24 20:59:17 crc kubenswrapper[4812]: I1124 20:59:17.297112 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-769875dd67-7r68m"] Nov 24 20:59:17 crc kubenswrapper[4812]: I1124 20:59:17.568319 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c66ddcf5-555wz" event={"ID":"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb","Type":"ContainerStarted","Data":"2b33e7aeda865ad7d382d1feab98aa842ec8887ce734bbd35a1b8864cb8c920b"} Nov 24 20:59:17 crc kubenswrapper[4812]: I1124 20:59:17.569426 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769875dd67-7r68m" event={"ID":"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0","Type":"ContainerStarted","Data":"6c75284f31af4c748633a4eb9f6de016a4057e3d775c54e6ae8c05d3801852b0"} Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.047035 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-p2j52"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.056915 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-p2j52"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.446923 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66c66ddcf5-555wz"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.512452 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d6b4c97d4-tx4dh"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.514251 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.525401 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.535709 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6b4c97d4-tx4dh"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.610692 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-769875dd67-7r68m"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.634897 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-tls-certs\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.635166 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-combined-ca-bundle\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.635246 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27nsj\" (UniqueName: \"kubernetes.io/projected/5c23031c-42fe-41dc-9e20-8eff2922676c-kube-api-access-27nsj\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.635328 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-config-data\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.635442 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-scripts\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.635514 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-secret-key\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.635630 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c23031c-42fe-41dc-9e20-8eff2922676c-logs\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.645284 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-655bf47886-z4qqq"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.675679 
4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.676531 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-655bf47886-z4qqq"] Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737646 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-scripts\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737690 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28908d79-deac-497f-94c0-dd47c148c6ba-logs\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737715 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-config-data\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-tls-certs\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737771 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-combined-ca-bundle\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737789 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27nsj\" (UniqueName: \"kubernetes.io/projected/5c23031c-42fe-41dc-9e20-8eff2922676c-kube-api-access-27nsj\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737813 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-config-data\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737835 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6rgp\" (UniqueName: \"kubernetes.io/projected/28908d79-deac-497f-94c0-dd47c148c6ba-kube-api-access-t6rgp\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737855 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-secret-key\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737887 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-scripts\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737907 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-secret-key\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737938 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-tls-certs\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.737985 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-combined-ca-bundle\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.738003 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c23031c-42fe-41dc-9e20-8eff2922676c-logs\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.738458 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c23031c-42fe-41dc-9e20-8eff2922676c-logs\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.739040 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-scripts\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.739253 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-config-data\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.744538 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-combined-ca-bundle\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: 
\"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.746049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-tls-certs\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.762375 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27nsj\" (UniqueName: \"kubernetes.io/projected/5c23031c-42fe-41dc-9e20-8eff2922676c-kube-api-access-27nsj\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.764801 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-secret-key\") pod \"horizon-7d6b4c97d4-tx4dh\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.840893 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-scripts\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.840952 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28908d79-deac-497f-94c0-dd47c148c6ba-logs\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.840975 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-config-data\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.841034 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6rgp\" (UniqueName: \"kubernetes.io/projected/28908d79-deac-497f-94c0-dd47c148c6ba-kube-api-access-t6rgp\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.841056 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-secret-key\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.841113 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-tls-certs\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 
20:59:18.841136 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-combined-ca-bundle\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.841450 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28908d79-deac-497f-94c0-dd47c148c6ba-logs\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.841865 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-scripts\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.842465 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-config-data\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.844474 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-combined-ca-bundle\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.844484 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-secret-key\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.849948 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-tls-certs\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.861982 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6rgp\" (UniqueName: \"kubernetes.io/projected/28908d79-deac-497f-94c0-dd47c148c6ba-kube-api-access-t6rgp\") pod \"horizon-655bf47886-z4qqq\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.871443 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:18 crc kubenswrapper[4812]: I1124 20:59:18.986172 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19dfd708-252a-4b23-9d6c-2615078ce723" path="/var/lib/kubelet/pods/19dfd708-252a-4b23-9d6c-2615078ce723/volumes" Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.011187 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.359792 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d6b4c97d4-tx4dh"] Nov 24 20:59:19 crc kubenswrapper[4812]: W1124 20:59:19.366804 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c23031c_42fe_41dc_9e20_8eff2922676c.slice/crio-d52e981809bc73ca27414cf3af6469983cdb52e0f5586d043614c0b95b890dda WatchSource:0}: Error finding container d52e981809bc73ca27414cf3af6469983cdb52e0f5586d043614c0b95b890dda: Status 404 returned error can't find the container with id d52e981809bc73ca27414cf3af6469983cdb52e0f5586d043614c0b95b890dda Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.465553 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-655bf47886-z4qqq"] Nov 24 20:59:19 crc kubenswrapper[4812]: W1124 20:59:19.479647 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28908d79_deac_497f_94c0_dd47c148c6ba.slice/crio-d06275dbefc94ce8b0de5bf7c435fa2ea1176c2705fbb19432dbe41c8b59e4a8 WatchSource:0}: Error finding container d06275dbefc94ce8b0de5bf7c435fa2ea1176c2705fbb19432dbe41c8b59e4a8: Status 404 returned error can't find the container with id d06275dbefc94ce8b0de5bf7c435fa2ea1176c2705fbb19432dbe41c8b59e4a8 Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.622085 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6b4c97d4-tx4dh" event={"ID":"5c23031c-42fe-41dc-9e20-8eff2922676c","Type":"ContainerStarted","Data":"d52e981809bc73ca27414cf3af6469983cdb52e0f5586d043614c0b95b890dda"} Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.623574 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655bf47886-z4qqq" event={"ID":"28908d79-deac-497f-94c0-dd47c148c6ba","Type":"ContainerStarted","Data":"d06275dbefc94ce8b0de5bf7c435fa2ea1176c2705fbb19432dbe41c8b59e4a8"} Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.625706 4812 generic.go:334] "Generic (PLEG): container finished" podID="38935c29-7af7-4470-8eb2-a752feb40275" containerID="805b40c140e1f30d927a2501a4fcd6ddb259ee88d15535d5e69940fff038ab45" exitCode=0 Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.625741 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38935c29-7af7-4470-8eb2-a752feb40275","Type":"ContainerDied","Data":"805b40c140e1f30d927a2501a4fcd6ddb259ee88d15535d5e69940fff038ab45"} Nov 24 20:59:19 crc kubenswrapper[4812]: I1124 20:59:19.985029 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171033 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcmfl\" (UniqueName: \"kubernetes.io/projected/38935c29-7af7-4470-8eb2-a752feb40275-kube-api-access-qcmfl\") pod \"38935c29-7af7-4470-8eb2-a752feb40275\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171080 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-public-tls-certs\") pod \"38935c29-7af7-4470-8eb2-a752feb40275\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171187 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-logs\") pod \"38935c29-7af7-4470-8eb2-a752feb40275\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171213 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-httpd-run\") pod \"38935c29-7af7-4470-8eb2-a752feb40275\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171243 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-scripts\") pod \"38935c29-7af7-4470-8eb2-a752feb40275\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171275 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-config-data\") pod \"38935c29-7af7-4470-8eb2-a752feb40275\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171429 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-combined-ca-bundle\") pod \"38935c29-7af7-4470-8eb2-a752feb40275\" (UID: \"38935c29-7af7-4470-8eb2-a752feb40275\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.171817 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-logs" (OuterVolumeSpecName: "logs") pod "38935c29-7af7-4470-8eb2-a752feb40275" (UID: "38935c29-7af7-4470-8eb2-a752feb40275"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.172157 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "38935c29-7af7-4470-8eb2-a752feb40275" (UID: "38935c29-7af7-4470-8eb2-a752feb40275"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.181046 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38935c29-7af7-4470-8eb2-a752feb40275-kube-api-access-qcmfl" (OuterVolumeSpecName: "kube-api-access-qcmfl") pod "38935c29-7af7-4470-8eb2-a752feb40275" (UID: "38935c29-7af7-4470-8eb2-a752feb40275"). InnerVolumeSpecName "kube-api-access-qcmfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.182514 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-scripts" (OuterVolumeSpecName: "scripts") pod "38935c29-7af7-4470-8eb2-a752feb40275" (UID: "38935c29-7af7-4470-8eb2-a752feb40275"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.232595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38935c29-7af7-4470-8eb2-a752feb40275" (UID: "38935c29-7af7-4470-8eb2-a752feb40275"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.253498 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-config-data" (OuterVolumeSpecName: "config-data") pod "38935c29-7af7-4470-8eb2-a752feb40275" (UID: "38935c29-7af7-4470-8eb2-a752feb40275"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.274035 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.274070 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcmfl\" (UniqueName: \"kubernetes.io/projected/38935c29-7af7-4470-8eb2-a752feb40275-kube-api-access-qcmfl\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.274083 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.274095 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38935c29-7af7-4470-8eb2-a752feb40275-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.274107 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.274117 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.281099 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "38935c29-7af7-4470-8eb2-a752feb40275" (UID: "38935c29-7af7-4470-8eb2-a752feb40275"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.291819 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.377397 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/38935c29-7af7-4470-8eb2-a752feb40275-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.479049 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-scripts\") pod \"5151fc6f-3b0d-460e-8c31-08092b0fca85\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.479491 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-httpd-run\") pod \"5151fc6f-3b0d-460e-8c31-08092b0fca85\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.479557 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-862mb\" (UniqueName: \"kubernetes.io/projected/5151fc6f-3b0d-460e-8c31-08092b0fca85-kube-api-access-862mb\") pod \"5151fc6f-3b0d-460e-8c31-08092b0fca85\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.479653 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-logs\") pod \"5151fc6f-3b0d-460e-8c31-08092b0fca85\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.479706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-internal-tls-certs\") pod \"5151fc6f-3b0d-460e-8c31-08092b0fca85\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.479780 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-combined-ca-bundle\") pod \"5151fc6f-3b0d-460e-8c31-08092b0fca85\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.479835 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-config-data\") pod \"5151fc6f-3b0d-460e-8c31-08092b0fca85\" (UID: \"5151fc6f-3b0d-460e-8c31-08092b0fca85\") " Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.482000 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5151fc6f-3b0d-460e-8c31-08092b0fca85" (UID: "5151fc6f-3b0d-460e-8c31-08092b0fca85"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.482037 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-logs" (OuterVolumeSpecName: "logs") pod "5151fc6f-3b0d-460e-8c31-08092b0fca85" (UID: "5151fc6f-3b0d-460e-8c31-08092b0fca85"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.484995 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5151fc6f-3b0d-460e-8c31-08092b0fca85-kube-api-access-862mb" (OuterVolumeSpecName: "kube-api-access-862mb") pod "5151fc6f-3b0d-460e-8c31-08092b0fca85" (UID: "5151fc6f-3b0d-460e-8c31-08092b0fca85"). InnerVolumeSpecName "kube-api-access-862mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.491168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-scripts" (OuterVolumeSpecName: "scripts") pod "5151fc6f-3b0d-460e-8c31-08092b0fca85" (UID: "5151fc6f-3b0d-460e-8c31-08092b0fca85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.532675 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5151fc6f-3b0d-460e-8c31-08092b0fca85" (UID: "5151fc6f-3b0d-460e-8c31-08092b0fca85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.558878 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5151fc6f-3b0d-460e-8c31-08092b0fca85" (UID: "5151fc6f-3b0d-460e-8c31-08092b0fca85"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.559784 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-config-data" (OuterVolumeSpecName: "config-data") pod "5151fc6f-3b0d-460e-8c31-08092b0fca85" (UID: "5151fc6f-3b0d-460e-8c31-08092b0fca85"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.582034 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.582063 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.582073 4812 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.582082 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-862mb\" (UniqueName: \"kubernetes.io/projected/5151fc6f-3b0d-460e-8c31-08092b0fca85-kube-api-access-862mb\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.582095 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5151fc6f-3b0d-460e-8c31-08092b0fca85-logs\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.582104 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.582112 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5151fc6f-3b0d-460e-8c31-08092b0fca85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.638625 4812 generic.go:334] "Generic (PLEG): container finished" podID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerID="17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9" exitCode=0 Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.638673 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.638701 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5151fc6f-3b0d-460e-8c31-08092b0fca85","Type":"ContainerDied","Data":"17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9"} Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.638750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5151fc6f-3b0d-460e-8c31-08092b0fca85","Type":"ContainerDied","Data":"386ef997536b56f1296628a9b070fb3041e9eae4b9a5b3132c5220043b3b7004"} Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.638817 4812 scope.go:117] "RemoveContainer" containerID="17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.642418 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"38935c29-7af7-4470-8eb2-a752feb40275","Type":"ContainerDied","Data":"b2e1ea1d4ceb8b86bbc5634a5cedbb47c45da0486497480ed102b5fcb6a63ce3"} Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.642517 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.676280 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.696039 4812 scope.go:117] "RemoveContainer" containerID="fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.757071 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.757362 4812 scope.go:117] "RemoveContainer" containerID="17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9" Nov 24 20:59:20 crc kubenswrapper[4812]: E1124 20:59:20.758485 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9\": container with ID starting with 17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9 not found: ID does not exist" containerID="17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.758581 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9"} err="failed to get container status \"17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9\": rpc error: code = NotFound desc = could not find container \"17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9\": container with ID starting with 17557d078fd00911d446e9cb7f16481c0a94a12726468b56b1a8932b6cacace9 not found: ID does not exist" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.758605 4812 scope.go:117] "RemoveContainer" containerID="fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29" Nov 24 20:59:20 crc kubenswrapper[4812]: E1124 20:59:20.759431 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29\": container with ID starting with fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29 not found: ID does not exist" containerID="fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.759462 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29"} err="failed to get container status \"fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29\": rpc error: code = NotFound desc = could not find container \"fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29\": container with ID starting with fc88945011acdb81802d0f1f81a82f9e6fec3cba1b30d5e34202ad175332ef29 not found: ID does not exist" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.759478 4812 scope.go:117] "RemoveContainer" containerID="805b40c140e1f30d927a2501a4fcd6ddb259ee88d15535d5e69940fff038ab45" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.794383 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.806846 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.817934 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 20:59:20 crc kubenswrapper[4812]: E1124 20:59:20.818703 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-log" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.818734 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-log" Nov 24 20:59:20 crc kubenswrapper[4812]: E1124 20:59:20.818771 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-httpd" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.818779 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-httpd" Nov 24 20:59:20 crc kubenswrapper[4812]: E1124 20:59:20.818812 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-httpd" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.818819 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-httpd" Nov 24 20:59:20 crc kubenswrapper[4812]: E1124 20:59:20.818843 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-log" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.818849 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-log" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.819065 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-httpd" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.819088 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-httpd" Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.819101 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" containerName="glance-log"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.819115 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="38935c29-7af7-4470-8eb2-a752feb40275" containerName="glance-log"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.823196 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.828973 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.829547 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d6xxb"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.829642 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.829811 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.836849 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.840361 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.846735 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.846777 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.846806 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.846828 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/725bf128-3f54-4775-8fbc-f12987560eb7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.846850 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/725bf128-3f54-4775-8fbc-f12987560eb7-logs\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.846931 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.846987 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8bx8\" (UniqueName: \"kubernetes.io/projected/725bf128-3f54-4775-8fbc-f12987560eb7-kube-api-access-m8bx8\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.853378 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.853790 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.861788 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.879288 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.885522 4812 scope.go:117] "RemoveContainer" containerID="9064a24928d30ca1c3ca4873fe3a0e3b7e19158aaf515861322062c22cf08c99"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.948687 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbdef179-c841-482d-a2dd-9486caaf1339-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.948754 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.948777 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.948830 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/725bf128-3f54-4775-8fbc-f12987560eb7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.948847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/725bf128-3f54-4775-8fbc-f12987560eb7-logs\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.948925 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.948956 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbdef179-c841-482d-a2dd-9486caaf1339-logs\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949003 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949032 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8bx8\" (UniqueName: \"kubernetes.io/projected/725bf128-3f54-4775-8fbc-f12987560eb7-kube-api-access-m8bx8\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949059 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949103 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949131 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27jg\" (UniqueName: \"kubernetes.io/projected/bbdef179-c841-482d-a2dd-9486caaf1339-kube-api-access-p27jg\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949656 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949693 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949788 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/725bf128-3f54-4775-8fbc-f12987560eb7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.949817 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/725bf128-3f54-4775-8fbc-f12987560eb7-logs\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.964672 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.966017 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.970270 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.973946 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/725bf128-3f54-4775-8fbc-f12987560eb7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.981986 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8bx8\" (UniqueName: \"kubernetes.io/projected/725bf128-3f54-4775-8fbc-f12987560eb7-kube-api-access-m8bx8\") pod \"glance-default-internal-api-0\" (UID: \"725bf128-3f54-4775-8fbc-f12987560eb7\") " pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.989152 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38935c29-7af7-4470-8eb2-a752feb40275" path="/var/lib/kubelet/pods/38935c29-7af7-4470-8eb2-a752feb40275/volumes"
Nov 24 20:59:20 crc kubenswrapper[4812]: I1124 20:59:20.989881 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5151fc6f-3b0d-460e-8c31-08092b0fca85" path="/var/lib/kubelet/pods/5151fc6f-3b0d-460e-8c31-08092b0fca85/volumes"
\"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.051822 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.051857 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-config-data\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.051882 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27jg\" (UniqueName: \"kubernetes.io/projected/bbdef179-c841-482d-a2dd-9486caaf1339-kube-api-access-p27jg\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.051988 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbdef179-c841-482d-a2dd-9486caaf1339-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.052007 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.052102 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbdef179-c841-482d-a2dd-9486caaf1339-logs\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.052839 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbdef179-c841-482d-a2dd-9486caaf1339-logs\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.053884 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bbdef179-c841-482d-a2dd-9486caaf1339-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.060081 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.062178 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-scripts\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.069843 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.072653 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbdef179-c841-482d-a2dd-9486caaf1339-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.084470 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p27jg\" (UniqueName: \"kubernetes.io/projected/bbdef179-c841-482d-a2dd-9486caaf1339-kube-api-access-p27jg\") pod \"glance-default-external-api-0\" (UID: \"bbdef179-c841-482d-a2dd-9486caaf1339\") " pod="openstack/glance-default-external-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.188246 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.201011 4812 util.go:30] "No sandbox for pod can be found. 
Nov 24 20:59:21 crc kubenswrapper[4812]: I1124 20:59:21.201011 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:23 crc kubenswrapper[4812]: I1124 20:59:23.966867 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196"
Nov 24 20:59:23 crc kubenswrapper[4812]: E1124 20:59:23.967464 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 20:59:27 crc kubenswrapper[4812]: I1124 20:59:27.531217 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 24 20:59:27 crc kubenswrapper[4812]: I1124 20:59:27.744826 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbdef179-c841-482d-a2dd-9486caaf1339","Type":"ContainerStarted","Data":"6fd9a34d76c84a5d48cb75a568c59c3339f26d068482cd06baf68220c2903c92"}
Nov 24 20:59:27 crc kubenswrapper[4812]: I1124 20:59:27.770703 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 24 20:59:27 crc kubenswrapper[4812]: W1124 20:59:27.776504 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod725bf128_3f54_4775_8fbc_f12987560eb7.slice/crio-453dd971da8245a1e1868de9c1253f4489357429661e8ed8c8c34459598ebc10 WatchSource:0}: Error finding container 453dd971da8245a1e1868de9c1253f4489357429661e8ed8c8c34459598ebc10: Status 404 returned error can't find the container with id 453dd971da8245a1e1868de9c1253f4489357429661e8ed8c8c34459598ebc10
Nov 24 20:59:28 crc kubenswrapper[4812]: I1124 20:59:28.757951 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"725bf128-3f54-4775-8fbc-f12987560eb7","Type":"ContainerStarted","Data":"453dd971da8245a1e1868de9c1253f4489357429661e8ed8c8c34459598ebc10"}
Nov 24 20:59:29 crc kubenswrapper[4812]: I1124 20:59:29.774700 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbdef179-c841-482d-a2dd-9486caaf1339","Type":"ContainerStarted","Data":"dfda13122d57c557b0327c577385768d402dc3c845a5b7f649b11e777ec8893e"}
Nov 24 20:59:29 crc kubenswrapper[4812]: I1124 20:59:29.778702 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"725bf128-3f54-4775-8fbc-f12987560eb7","Type":"ContainerStarted","Data":"8aa5e92c64bd79545dd2ab1045b8e7192b85180e8196d8466d27ebf38cb26740"}
Nov 24 20:59:30 crc kubenswrapper[4812]: I1124 20:59:30.790060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"725bf128-3f54-4775-8fbc-f12987560eb7","Type":"ContainerStarted","Data":"1b114e8188394fde9d431f5cc7eb1d8a4f16a328b113ceb4db94873651db0157"}
Nov 24 20:59:30 crc kubenswrapper[4812]: I1124 20:59:30.792521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bbdef179-c841-482d-a2dd-9486caaf1339","Type":"ContainerStarted","Data":"d11ca17a11088e2a48cc48415c9e35750da8039997bacf9f409f45fc3dd95a66"}
Nov 24 20:59:30 crc kubenswrapper[4812]: I1124 20:59:30.826451 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.826405477 podStartE2EDuration="10.826405477s" podCreationTimestamp="2025-11-24 20:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:59:30.819664366 +0000 UTC m=+6164.608616757" watchObservedRunningTime="2025-11-24 20:59:30.826405477 +0000 UTC m=+6164.615357858"
Nov 24 20:59:30 crc kubenswrapper[4812]: I1124 20:59:30.852538 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.852519036 podStartE2EDuration="10.852519036s" podCreationTimestamp="2025-11-24 20:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 20:59:30.84098239 +0000 UTC m=+6164.629934761" watchObservedRunningTime="2025-11-24 20:59:30.852519036 +0000 UTC m=+6164.641471407"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.189198 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.189238 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.202676 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.202717 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.226325 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.230835 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.237684 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.256882 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.809363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c66ddcf5-555wz" event={"ID":"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb","Type":"ContainerStarted","Data":"2047478c07a0f014b476b6576a7e04bf5a4f3d4c86b9b43b9ea4da6deb102522"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.809645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c66ddcf5-555wz" event={"ID":"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb","Type":"ContainerStarted","Data":"1b88f69d44dcb6008fe0dcf8cb3b3a0e835129d80285782c06470a11de762b8b"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.809692 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66c66ddcf5-555wz" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon-log" containerID="cri-o://1b88f69d44dcb6008fe0dcf8cb3b3a0e835129d80285782c06470a11de762b8b" gracePeriod=30
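[Annotation] gracePeriod=30 above is the window between SIGTERM and SIGKILL; the marketplace pod later in this log is killed with gracePeriod=2. A hedged sketch of that two-phase stop, a stand-in for what the runtime does rather than CRI-O's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace for the process to exit,
// then escalates to SIGKILL -- the contract behind "gracePeriod=30".
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	exited := make(chan error, 1)
	go func() { exited <- cmd.Wait() }()
	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-exited:
		fmt.Println("container exited within grace period")
	case <-time.After(grace):
		fmt.Println("grace period elapsed, sending SIGKILL")
		cmd.Process.Kill()
		<-exited
	}
}

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for a container process
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 2*time.Second)
}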
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.809830 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66c66ddcf5-555wz" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon" containerID="cri-o://2047478c07a0f014b476b6576a7e04bf5a4f3d4c86b9b43b9ea4da6deb102522" gracePeriod=30
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.814245 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769875dd67-7r68m" event={"ID":"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0","Type":"ContainerStarted","Data":"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.814304 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769875dd67-7r68m" event={"ID":"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0","Type":"ContainerStarted","Data":"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.814450 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-769875dd67-7r68m" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon-log" containerID="cri-o://84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9" gracePeriod=30
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.814476 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-769875dd67-7r68m" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon" containerID="cri-o://9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29" gracePeriod=30
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.820307 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6b4c97d4-tx4dh" event={"ID":"5c23031c-42fe-41dc-9e20-8eff2922676c","Type":"ContainerStarted","Data":"db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.820361 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6b4c97d4-tx4dh" event={"ID":"5c23031c-42fe-41dc-9e20-8eff2922676c","Type":"ContainerStarted","Data":"882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.823282 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655bf47886-z4qqq" event={"ID":"28908d79-deac-497f-94c0-dd47c148c6ba","Type":"ContainerStarted","Data":"36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.823364 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655bf47886-z4qqq" event={"ID":"28908d79-deac-497f-94c0-dd47c148c6ba","Type":"ContainerStarted","Data":"649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55"}
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.823534 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.823567 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.824649 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.824693 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.844067 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66c66ddcf5-555wz" podStartSLOduration=2.650519943 podStartE2EDuration="15.844044667s" podCreationTimestamp="2025-11-24 20:59:16 +0000 UTC" firstStartedPulling="2025-11-24 20:59:17.174478845 +0000 UTC m=+6150.963431216" lastFinishedPulling="2025-11-24 20:59:30.368003569 +0000 UTC m=+6164.156955940" observedRunningTime="2025-11-24 20:59:31.832776868 +0000 UTC m=+6165.621729239" watchObservedRunningTime="2025-11-24 20:59:31.844044667 +0000 UTC m=+6165.632997048"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.865801 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-655bf47886-z4qqq" podStartSLOduration=2.93102553 podStartE2EDuration="13.865782722s" podCreationTimestamp="2025-11-24 20:59:18 +0000 UTC" firstStartedPulling="2025-11-24 20:59:19.484968001 +0000 UTC m=+6153.273920372" lastFinishedPulling="2025-11-24 20:59:30.419725193 +0000 UTC m=+6164.208677564" observedRunningTime="2025-11-24 20:59:31.851891589 +0000 UTC m=+6165.640843990" watchObservedRunningTime="2025-11-24 20:59:31.865782722 +0000 UTC m=+6165.654735093"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.878975 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d6b4c97d4-tx4dh" podStartSLOduration=2.874896433 podStartE2EDuration="13.878952625s" podCreationTimestamp="2025-11-24 20:59:18 +0000 UTC" firstStartedPulling="2025-11-24 20:59:19.369850864 +0000 UTC m=+6153.158803255" lastFinishedPulling="2025-11-24 20:59:30.373907076 +0000 UTC m=+6164.162859447" observedRunningTime="2025-11-24 20:59:31.869766985 +0000 UTC m=+6165.658719366" watchObservedRunningTime="2025-11-24 20:59:31.878952625 +0000 UTC m=+6165.667904996"
Nov 24 20:59:31 crc kubenswrapper[4812]: I1124 20:59:31.898151 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-769875dd67-7r68m" podStartSLOduration=2.777017882 podStartE2EDuration="15.898120857s" podCreationTimestamp="2025-11-24 20:59:16 +0000 UTC" firstStartedPulling="2025-11-24 20:59:17.299565874 +0000 UTC m=+6151.088518255" lastFinishedPulling="2025-11-24 20:59:30.420668859 +0000 UTC m=+6164.209621230" observedRunningTime="2025-11-24 20:59:31.891941372 +0000 UTC m=+6165.680893743" watchObservedRunningTime="2025-11-24 20:59:31.898120857 +0000 UTC m=+6165.687073248"
status="" pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 20:59:36 crc kubenswrapper[4812]: I1124 20:59:36.808804 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-769875dd67-7r68m" Nov 24 20:59:38 crc kubenswrapper[4812]: I1124 20:59:38.326802 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 20:59:38 crc kubenswrapper[4812]: I1124 20:59:38.872560 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:38 crc kubenswrapper[4812]: I1124 20:59:38.872622 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 20:59:39 crc kubenswrapper[4812]: I1124 20:59:39.011396 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:39 crc kubenswrapper[4812]: I1124 20:59:39.011498 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 20:59:44 crc kubenswrapper[4812]: I1124 20:59:44.053874 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-188a-account-create-8jxrl"] Nov 24 20:59:44 crc kubenswrapper[4812]: I1124 20:59:44.063436 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9dp74"] Nov 24 20:59:44 crc kubenswrapper[4812]: I1124 20:59:44.077221 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9dp74"] Nov 24 20:59:44 crc kubenswrapper[4812]: I1124 20:59:44.086075 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-188a-account-create-8jxrl"] Nov 24 20:59:44 crc kubenswrapper[4812]: I1124 20:59:44.984262 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="310ff154-913c-4c61-b953-08c203c81f73" path="/var/lib/kubelet/pods/310ff154-913c-4c61-b953-08c203c81f73/volumes" Nov 24 20:59:44 crc kubenswrapper[4812]: I1124 20:59:44.985528 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5173d2d-26b0-4d31-86f8-80442713d33c" path="/var/lib/kubelet/pods/a5173d2d-26b0-4d31-86f8-80442713d33c/volumes" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.427922 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lhkqd"] Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.430330 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.461222 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhkqd"] Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.526050 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpwc\" (UniqueName: \"kubernetes.io/projected/02040e3a-0ad4-4442-a91f-4237f9974382-kube-api-access-ftpwc\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.526244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-utilities\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.526272 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-catalog-content\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.628035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-utilities\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.628081 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-catalog-content\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.628122 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpwc\" (UniqueName: \"kubernetes.io/projected/02040e3a-0ad4-4442-a91f-4237f9974382-kube-api-access-ftpwc\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.628649 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-utilities\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.628663 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-catalog-content\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.649352 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ftpwc\" (UniqueName: \"kubernetes.io/projected/02040e3a-0ad4-4442-a91f-4237f9974382-kube-api-access-ftpwc\") pod \"certified-operators-lhkqd\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") " pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:45 crc kubenswrapper[4812]: I1124 20:59:45.753556 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhkqd" Nov 24 20:59:46 crc kubenswrapper[4812]: I1124 20:59:46.284401 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhkqd"] Nov 24 20:59:46 crc kubenswrapper[4812]: W1124 20:59:46.285016 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02040e3a_0ad4_4442_a91f_4237f9974382.slice/crio-e2795d7345ded0ad3e76fcdcc082df560b71a345140598222067b538d79edbd7 WatchSource:0}: Error finding container e2795d7345ded0ad3e76fcdcc082df560b71a345140598222067b538d79edbd7: Status 404 returned error can't find the container with id e2795d7345ded0ad3e76fcdcc082df560b71a345140598222067b538d79edbd7 Nov 24 20:59:47 crc kubenswrapper[4812]: I1124 20:59:47.027551 4812 generic.go:334] "Generic (PLEG): container finished" podID="02040e3a-0ad4-4442-a91f-4237f9974382" containerID="3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d" exitCode=0 Nov 24 20:59:47 crc kubenswrapper[4812]: I1124 20:59:47.029078 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhkqd" event={"ID":"02040e3a-0ad4-4442-a91f-4237f9974382","Type":"ContainerDied","Data":"3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d"} Nov 24 20:59:47 crc kubenswrapper[4812]: I1124 20:59:47.039142 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhkqd" event={"ID":"02040e3a-0ad4-4442-a91f-4237f9974382","Type":"ContainerStarted","Data":"e2795d7345ded0ad3e76fcdcc082df560b71a345140598222067b538d79edbd7"} Nov 24 20:59:48 crc kubenswrapper[4812]: I1124 20:59:48.088144 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhkqd" event={"ID":"02040e3a-0ad4-4442-a91f-4237f9974382","Type":"ContainerStarted","Data":"d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd"} Nov 24 20:59:48 crc kubenswrapper[4812]: I1124 20:59:48.874788 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d6b4c97d4-tx4dh" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.121:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.121:8443: connect: connection refused" Nov 24 20:59:48 crc kubenswrapper[4812]: I1124 20:59:48.968700 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 20:59:48 crc kubenswrapper[4812]: E1124 20:59:48.968975 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 20:59:49 crc kubenswrapper[4812]: I1124 
Nov 24 20:59:49 crc kubenswrapper[4812]: I1124 20:59:49.016672 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-655bf47886-z4qqq" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused"
Nov 24 20:59:50 crc kubenswrapper[4812]: I1124 20:59:50.122425 4812 generic.go:334] "Generic (PLEG): container finished" podID="02040e3a-0ad4-4442-a91f-4237f9974382" containerID="d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd" exitCode=0
Nov 24 20:59:50 crc kubenswrapper[4812]: I1124 20:59:50.122526 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhkqd" event={"ID":"02040e3a-0ad4-4442-a91f-4237f9974382","Type":"ContainerDied","Data":"d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd"}
Nov 24 20:59:51 crc kubenswrapper[4812]: I1124 20:59:51.136212 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhkqd" event={"ID":"02040e3a-0ad4-4442-a91f-4237f9974382","Type":"ContainerStarted","Data":"c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1"}
Nov 24 20:59:52 crc kubenswrapper[4812]: I1124 20:59:52.157800 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lhkqd" podStartSLOduration=3.334142013 podStartE2EDuration="7.157784486s" podCreationTimestamp="2025-11-24 20:59:45 +0000 UTC" firstStartedPulling="2025-11-24 20:59:47.034577197 +0000 UTC m=+6180.823529568" lastFinishedPulling="2025-11-24 20:59:50.85821965 +0000 UTC m=+6184.647172041" observedRunningTime="2025-11-24 20:59:52.157478157 +0000 UTC m=+6185.946430528" watchObservedRunningTime="2025-11-24 20:59:52.157784486 +0000 UTC m=+6185.946736857"
Nov 24 20:59:53 crc kubenswrapper[4812]: I1124 20:59:53.029358 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-r64jf"]
Nov 24 20:59:53 crc kubenswrapper[4812]: I1124 20:59:53.050420 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-r64jf"]
Nov 24 20:59:53 crc kubenswrapper[4812]: I1124 20:59:53.575534 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:53 crc kubenswrapper[4812]: I1124 20:59:53.655225 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 20:59:54 crc kubenswrapper[4812]: I1124 20:59:54.979659 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7ae9954-3d82-47b9-9a6d-399d05351de2" path="/var/lib/kubelet/pods/e7ae9954-3d82-47b9-9a6d-399d05351de2/volumes"
Nov 24 20:59:55 crc kubenswrapper[4812]: I1124 20:59:55.149954 4812 scope.go:117] "RemoveContainer" containerID="d668797aa2af3e67c45edb9f290a38ebbdb63c3961304a7c7d91e4acf729b422"
Nov 24 20:59:55 crc kubenswrapper[4812]: I1124 20:59:55.178306 4812 scope.go:117] "RemoveContainer" containerID="c402721e22973a18d9629a69d828e7794fb82e4df24161c5af42d0d888ff6ac3"
Nov 24 20:59:55 crc kubenswrapper[4812]: I1124 20:59:55.270020 4812 scope.go:117] "RemoveContainer" containerID="d163a649335407618204991a880eba9f773f354555673b85f6ec510bcfec42f0"
Nov 24 20:59:55 crc kubenswrapper[4812]: I1124 20:59:55.302911 4812 scope.go:117] "RemoveContainer" containerID="d060d02abe36f376bd30a5079deeb543d51ff3987deecf587ed94cc5d088bc43"
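[Annotation] The "Probe failed" records above (here and at 20:59:48.874788) are plain HTTPS GETs against the pod IP; until horizon's server binds :8443 the dial is refused, the startup probe stays "unhealthy" in the SyncLoop lines, and eventually flips to "started". A minimal stand-in for such a probe; the URL is taken from the log, while the 1s timeout is an assumption:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one GET the way an httpGet startup probe does: any 2xx/3xx
// status is success, while transport errors (like the "connection refused"
// above) and 4xx/5xx are failures. Kubelet HTTPS probes skip certificate
// verification, hence the TLS config.
func probe(url string) error {
	client := &http.Client{
		Timeout:   1 * time.Second, // assumed probe timeout
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probe("https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/"); err != nil {
		fmt.Println("Probe failed:", err) // e.g. dial tcp ...: connect: connection refused
	}
}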
Nov 24 20:59:55 crc kubenswrapper[4812]: I1124 20:59:55.342822 4812 scope.go:117] "RemoveContainer" containerID="df79cedf09ece862001ebe0dfbd60dc36249348554236f7d8317bfa8c50c2f32"
Nov 24 20:59:55 crc kubenswrapper[4812]: I1124 20:59:55.410516 4812 scope.go:117] "RemoveContainer" containerID="99c13bfbd6fd38a15e53664b0a7db990f438998c4dce2b2a497511290a653e8a"
Nov 24 20:59:57 crc kubenswrapper[4812]: I1124 20:59:55.753817 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lhkqd"
Nov 24 20:59:57 crc kubenswrapper[4812]: I1124 20:59:55.753849 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lhkqd"
Nov 24 20:59:57 crc kubenswrapper[4812]: I1124 20:59:55.824623 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lhkqd"
Nov 24 20:59:57 crc kubenswrapper[4812]: I1124 20:59:56.257499 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lhkqd"
Nov 24 20:59:57 crc kubenswrapper[4812]: I1124 20:59:56.302294 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhkqd"]
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.231303 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lhkqd" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="registry-server" containerID="cri-o://c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1" gracePeriod=2
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.763527 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhkqd"
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.801556 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpwc\" (UniqueName: \"kubernetes.io/projected/02040e3a-0ad4-4442-a91f-4237f9974382-kube-api-access-ftpwc\") pod \"02040e3a-0ad4-4442-a91f-4237f9974382\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") "
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.802285 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-catalog-content\") pod \"02040e3a-0ad4-4442-a91f-4237f9974382\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") "
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.802361 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-utilities\") pod \"02040e3a-0ad4-4442-a91f-4237f9974382\" (UID: \"02040e3a-0ad4-4442-a91f-4237f9974382\") "
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.806273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-utilities" (OuterVolumeSpecName: "utilities") pod "02040e3a-0ad4-4442-a91f-4237f9974382" (UID: "02040e3a-0ad4-4442-a91f-4237f9974382"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.827633 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02040e3a-0ad4-4442-a91f-4237f9974382-kube-api-access-ftpwc" (OuterVolumeSpecName: "kube-api-access-ftpwc") pod "02040e3a-0ad4-4442-a91f-4237f9974382" (UID: "02040e3a-0ad4-4442-a91f-4237f9974382"). InnerVolumeSpecName "kube-api-access-ftpwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.857534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02040e3a-0ad4-4442-a91f-4237f9974382" (UID: "02040e3a-0ad4-4442-a91f-4237f9974382"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.905409 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpwc\" (UniqueName: \"kubernetes.io/projected/02040e3a-0ad4-4442-a91f-4237f9974382-kube-api-access-ftpwc\") on node \"crc\" DevicePath \"\""
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.905593 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 20:59:58 crc kubenswrapper[4812]: I1124 20:59:58.905695 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02040e3a-0ad4-4442-a91f-4237f9974382-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.247565 4812 generic.go:334] "Generic (PLEG): container finished" podID="02040e3a-0ad4-4442-a91f-4237f9974382" containerID="c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1" exitCode=0
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.247616 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lhkqd"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.247644 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhkqd" event={"ID":"02040e3a-0ad4-4442-a91f-4237f9974382","Type":"ContainerDied","Data":"c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1"}
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.247714 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhkqd" event={"ID":"02040e3a-0ad4-4442-a91f-4237f9974382","Type":"ContainerDied","Data":"e2795d7345ded0ad3e76fcdcc082df560b71a345140598222067b538d79edbd7"}
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.247754 4812 scope.go:117] "RemoveContainer" containerID="c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.293434 4812 scope.go:117] "RemoveContainer" containerID="d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.293977 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhkqd"]
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.316236 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lhkqd"]
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.355909 4812 scope.go:117] "RemoveContainer" containerID="3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.415871 4812 scope.go:117] "RemoveContainer" containerID="c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1"
Nov 24 20:59:59 crc kubenswrapper[4812]: E1124 20:59:59.416844 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1\": container with ID starting with c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1 not found: ID does not exist" containerID="c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.416881 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1"} err="failed to get container status \"c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1\": rpc error: code = NotFound desc = could not find container \"c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1\": container with ID starting with c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1 not found: ID does not exist"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.416907 4812 scope.go:117] "RemoveContainer" containerID="d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd"
Nov 24 20:59:59 crc kubenswrapper[4812]: E1124 20:59:59.417407 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd\": container with ID starting with d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd not found: ID does not exist" containerID="d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.417437 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd"} err="failed to get container status \"d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd\": rpc error: code = NotFound desc = could not find container \"d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd\": container with ID starting with d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd not found: ID does not exist"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.417452 4812 scope.go:117] "RemoveContainer" containerID="3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d"
Nov 24 20:59:59 crc kubenswrapper[4812]: E1124 20:59:59.417897 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d\": container with ID starting with 3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d not found: ID does not exist" containerID="3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d"
Nov 24 20:59:59 crc kubenswrapper[4812]: I1124 20:59:59.417920 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d"} err="failed to get container status \"3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d\": rpc error: code = NotFound desc = could not find container \"3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d\": container with ID starting with 3b9c0edd13569de46fbfff0e268d775c32b301b30ff97ac1ac835b3a8156690d not found: ID does not exist"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.156086 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"]
Nov 24 21:00:00 crc kubenswrapper[4812]: E1124 21:00:00.157229 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="extract-content"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.157377 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="extract-content"
Nov 24 21:00:00 crc kubenswrapper[4812]: E1124 21:00:00.157542 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="registry-server"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.157628 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="registry-server"
Nov 24 21:00:00 crc kubenswrapper[4812]: E1124 21:00:00.157700 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="extract-utilities"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.157773 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="extract-utilities"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.158079 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" containerName="registry-server"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.159098 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.161497 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.161497 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.165427 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"]
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.234224 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-config-volume\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.234368 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-secret-volume\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.234436 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bztxf\" (UniqueName: \"kubernetes.io/projected/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-kube-api-access-bztxf\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.336838 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-secret-volume\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.336945 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bztxf\" (UniqueName: \"kubernetes.io/projected/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-kube-api-access-bztxf\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.337250 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-config-volume\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.341950 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-config-volume\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.353940 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-secret-volume\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.363936 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bztxf\" (UniqueName: \"kubernetes.io/projected/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-kube-api-access-bztxf\") pod \"collect-profiles-29400300-4tcpn\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.480541 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.773875 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7d6b4c97d4-tx4dh"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.803782 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-655bf47886-z4qqq"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.962591 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"]
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.965433 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196"
Nov 24 21:00:00 crc kubenswrapper[4812]: E1124 21:00:00.965750 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:00:00 crc kubenswrapper[4812]: I1124 21:00:00.982420 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02040e3a-0ad4-4442-a91f-4237f9974382" path="/var/lib/kubelet/pods/02040e3a-0ad4-4442-a91f-4237f9974382/volumes"
Nov 24 21:00:01 crc kubenswrapper[4812]: I1124 21:00:01.283998 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn" event={"ID":"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d","Type":"ContainerStarted","Data":"93efdf1f7c1d67048fb9f244bb3f895fbf3a3f9b97b2534cdb480eb31402bfc7"}
Nov 24 21:00:01 crc kubenswrapper[4812]: I1124 21:00:01.284422 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn" event={"ID":"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d","Type":"ContainerStarted","Data":"bce1ba909aa64d3a74bfb9c302eb656e1561a26c8373873b9f3cde618b409858"}
Nov 24 21:00:01 crc kubenswrapper[4812]: I1124 21:00:01.324696 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn" podStartSLOduration=1.3246768740000001 podStartE2EDuration="1.324676874s" podCreationTimestamp="2025-11-24 21:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:00:01.31110222 +0000 UTC m=+6195.100054611" watchObservedRunningTime="2025-11-24 21:00:01.324676874 +0000 UTC m=+6195.113629245"
Nov 24 21:00:01 crc kubenswrapper[4812]: W1124 21:00:01.877868 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02040e3a_0ad4_4442_a91f_4237f9974382.slice/crio-d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd.scope WatchSource:0}: Error finding container d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd: Status 404 returned error can't find the container with id d8a369f9bbe526772ada5eb1e84e48dddcee60a906f0d8fc3e63823361bb93fd
Nov 24 21:00:01 crc kubenswrapper[4812]: W1124 21:00:01.882041 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02040e3a_0ad4_4442_a91f_4237f9974382.slice/crio-c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1.scope WatchSource:0}: Error finding container c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1: Status 404 returned error can't find the container with id c71111bb02cf818537ac3e3fdf0cc33107066eddbc8010475c30da4acb905df1
Nov 24 21:00:02 crc kubenswrapper[4812]: E1124 21:00:02.179217 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27d498fa_8f45_44cc_9bda_63f0e5e0f7bb.slice/crio-2047478c07a0f014b476b6576a7e04bf5a4f3d4c86b9b43b9ea4da6deb102522.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.267798 4812 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/horizon-769875dd67-7r68m" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.314924 4812 generic.go:334] "Generic (PLEG): container finished" podID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerID="9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29" exitCode=137 Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.314976 4812 generic.go:334] "Generic (PLEG): container finished" podID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerID="84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9" exitCode=137 Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.315065 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769875dd67-7r68m" event={"ID":"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0","Type":"ContainerDied","Data":"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29"} Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.315092 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769875dd67-7r68m" event={"ID":"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0","Type":"ContainerDied","Data":"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9"} Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.315103 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-769875dd67-7r68m" event={"ID":"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0","Type":"ContainerDied","Data":"6c75284f31af4c748633a4eb9f6de016a4057e3d775c54e6ae8c05d3801852b0"} Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.315134 4812 scope.go:117] "RemoveContainer" containerID="9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.315310 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-769875dd67-7r68m" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.331486 4812 generic.go:334] "Generic (PLEG): container finished" podID="fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" containerID="93efdf1f7c1d67048fb9f244bb3f895fbf3a3f9b97b2534cdb480eb31402bfc7" exitCode=0 Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.331559 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn" event={"ID":"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d","Type":"ContainerDied","Data":"93efdf1f7c1d67048fb9f244bb3f895fbf3a3f9b97b2534cdb480eb31402bfc7"} Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.335601 4812 generic.go:334] "Generic (PLEG): container finished" podID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerID="2047478c07a0f014b476b6576a7e04bf5a4f3d4c86b9b43b9ea4da6deb102522" exitCode=137 Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.335625 4812 generic.go:334] "Generic (PLEG): container finished" podID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerID="1b88f69d44dcb6008fe0dcf8cb3b3a0e835129d80285782c06470a11de762b8b" exitCode=137 Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.335642 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c66ddcf5-555wz" event={"ID":"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb","Type":"ContainerDied","Data":"2047478c07a0f014b476b6576a7e04bf5a4f3d4c86b9b43b9ea4da6deb102522"} Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.335662 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c66ddcf5-555wz" event={"ID":"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb","Type":"ContainerDied","Data":"1b88f69d44dcb6008fe0dcf8cb3b3a0e835129d80285782c06470a11de762b8b"} Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.385939 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-scripts\") pod \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.386028 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-horizon-secret-key\") pod \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.386079 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-logs\") pod \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.386136 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sggsb\" (UniqueName: \"kubernetes.io/projected/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-kube-api-access-sggsb\") pod \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.386362 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-config-data\") pod \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\" (UID: \"c23ec86d-efe2-4fd9-806f-20c7f6eee8b0\") " Nov 24 
21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.390064 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-logs" (OuterVolumeSpecName: "logs") pod "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" (UID: "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.398564 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" (UID: "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.423870 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.432463 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-kube-api-access-sggsb" (OuterVolumeSpecName: "kube-api-access-sggsb") pod "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" (UID: "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0"). InnerVolumeSpecName "kube-api-access-sggsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.461672 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-scripts" (OuterVolumeSpecName: "scripts") pod "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" (UID: "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.474646 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-config-data" (OuterVolumeSpecName: "config-data") pod "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" (UID: "c23ec86d-efe2-4fd9-806f-20c7f6eee8b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488015 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh85r\" (UniqueName: \"kubernetes.io/projected/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-kube-api-access-lh85r\") pod \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488169 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-logs\") pod \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488271 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-config-data\") pod \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488392 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-scripts\") pod \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488434 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-horizon-secret-key\") pod \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\" (UID: \"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb\") " Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488853 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488870 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488879 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488889 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.488899 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sggsb\" (UniqueName: \"kubernetes.io/projected/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0-kube-api-access-sggsb\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.491039 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-logs" (OuterVolumeSpecName: "logs") pod "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" (UID: "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.491232 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" (UID: "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.493218 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-kube-api-access-lh85r" (OuterVolumeSpecName: "kube-api-access-lh85r") pod "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" (UID: "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb"). InnerVolumeSpecName "kube-api-access-lh85r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.518085 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-config-data" (OuterVolumeSpecName: "config-data") pod "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" (UID: "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.528222 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-scripts" (OuterVolumeSpecName: "scripts") pod "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" (UID: "27d498fa-8f45-44cc-9bda-63f0e5e0f7bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.589928 4812 scope.go:117] "RemoveContainer" containerID="84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.591679 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh85r\" (UniqueName: \"kubernetes.io/projected/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-kube-api-access-lh85r\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.591713 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.591726 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.591736 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.591744 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.626394 4812 scope.go:117] "RemoveContainer" containerID="9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29" Nov 24 21:00:02 crc kubenswrapper[4812]: E1124 
21:00:02.629684 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29\": container with ID starting with 9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29 not found: ID does not exist" containerID="9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.629717 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29"} err="failed to get container status \"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29\": rpc error: code = NotFound desc = could not find container \"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29\": container with ID starting with 9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29 not found: ID does not exist" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.629738 4812 scope.go:117] "RemoveContainer" containerID="84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9" Nov 24 21:00:02 crc kubenswrapper[4812]: E1124 21:00:02.633599 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9\": container with ID starting with 84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9 not found: ID does not exist" containerID="84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.633626 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9"} err="failed to get container status \"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9\": rpc error: code = NotFound desc = could not find container \"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9\": container with ID starting with 84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9 not found: ID does not exist" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.633644 4812 scope.go:117] "RemoveContainer" containerID="9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.634016 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29"} err="failed to get container status \"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29\": rpc error: code = NotFound desc = could not find container \"9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29\": container with ID starting with 9e2b421122c946d39df45fd2a4ecc560ee5cf8cd721ce334fb94831d1a0a0a29 not found: ID does not exist" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.634047 4812 scope.go:117] "RemoveContainer" containerID="84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.634700 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9"} err="failed to get container status \"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9\": rpc error: code = 
NotFound desc = could not find container \"84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9\": container with ID starting with 84de78fe6d1ef39563d15b7bab839d06e315573517f9768e7e1d417f22f37af9 not found: ID does not exist" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.655461 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-769875dd67-7r68m"] Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.668024 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-769875dd67-7r68m"] Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.757839 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.934645 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 21:00:02 crc kubenswrapper[4812]: I1124 21:00:02.981912 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" path="/var/lib/kubelet/pods/c23ec86d-efe2-4fd9-806f-20c7f6eee8b0/volumes" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.000776 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6b4c97d4-tx4dh"] Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.344980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66c66ddcf5-555wz" event={"ID":"27d498fa-8f45-44cc-9bda-63f0e5e0f7bb","Type":"ContainerDied","Data":"2b33e7aeda865ad7d382d1feab98aa842ec8887ce734bbd35a1b8864cb8c920b"} Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.345038 4812 scope.go:117] "RemoveContainer" containerID="2047478c07a0f014b476b6576a7e04bf5a4f3d4c86b9b43b9ea4da6deb102522" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.345063 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66c66ddcf5-555wz" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.347282 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d6b4c97d4-tx4dh" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon-log" containerID="cri-o://882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2" gracePeriod=30 Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.347375 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d6b4c97d4-tx4dh" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" containerID="cri-o://db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669" gracePeriod=30 Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.377993 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66c66ddcf5-555wz"] Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.394098 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66c66ddcf5-555wz"] Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.582089 4812 scope.go:117] "RemoveContainer" containerID="1b88f69d44dcb6008fe0dcf8cb3b3a0e835129d80285782c06470a11de762b8b" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.717329 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.817975 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-secret-volume\") pod \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.818104 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bztxf\" (UniqueName: \"kubernetes.io/projected/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-kube-api-access-bztxf\") pod \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.818268 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-config-volume\") pod \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\" (UID: \"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d\") " Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.819291 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" (UID: "fb5383a9-1cd1-478f-88d5-fe697f9dfe4d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.822431 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-kube-api-access-bztxf" (OuterVolumeSpecName: "kube-api-access-bztxf") pod "fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" (UID: "fb5383a9-1cd1-478f-88d5-fe697f9dfe4d"). InnerVolumeSpecName "kube-api-access-bztxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.835779 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" (UID: "fb5383a9-1cd1-478f-88d5-fe697f9dfe4d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.927825 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.928649 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bztxf\" (UniqueName: \"kubernetes.io/projected/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-kube-api-access-bztxf\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:03 crc kubenswrapper[4812]: I1124 21:00:03.928691 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:04 crc kubenswrapper[4812]: I1124 21:00:04.366610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn" event={"ID":"fb5383a9-1cd1-478f-88d5-fe697f9dfe4d","Type":"ContainerDied","Data":"bce1ba909aa64d3a74bfb9c302eb656e1561a26c8373873b9f3cde618b409858"} Nov 24 21:00:04 crc kubenswrapper[4812]: I1124 21:00:04.366655 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn" Nov 24 21:00:04 crc kubenswrapper[4812]: I1124 21:00:04.366668 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bce1ba909aa64d3a74bfb9c302eb656e1561a26c8373873b9f3cde618b409858" Nov 24 21:00:04 crc kubenswrapper[4812]: I1124 21:00:04.391255 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn"] Nov 24 21:00:04 crc kubenswrapper[4812]: I1124 21:00:04.402514 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400255-jzgzn"] Nov 24 21:00:04 crc kubenswrapper[4812]: I1124 21:00:04.979960 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" path="/var/lib/kubelet/pods/27d498fa-8f45-44cc-9bda-63f0e5e0f7bb/volumes" Nov 24 21:00:04 crc kubenswrapper[4812]: I1124 21:00:04.981382 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636f9036-4397-4aa2-a431-cd8253bf87c3" path="/var/lib/kubelet/pods/636f9036-4397-4aa2-a431-cd8253bf87c3/volumes" Nov 24 21:00:07 crc kubenswrapper[4812]: I1124 21:00:07.421637 4812 generic.go:334] "Generic (PLEG): container finished" podID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerID="db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669" exitCode=0 Nov 24 21:00:07 crc kubenswrapper[4812]: I1124 21:00:07.421767 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6b4c97d4-tx4dh" event={"ID":"5c23031c-42fe-41dc-9e20-8eff2922676c","Type":"ContainerDied","Data":"db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669"} Nov 24 21:00:08 crc kubenswrapper[4812]: I1124 21:00:08.872679 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d6b4c97d4-tx4dh" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.121:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.121:8443: connect: connection refused" Nov 24 21:00:15 crc kubenswrapper[4812]: I1124 21:00:15.967193 
4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:00:15 crc kubenswrapper[4812]: E1124 21:00:15.968615 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:00:18 crc kubenswrapper[4812]: I1124 21:00:18.872810 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d6b4c97d4-tx4dh" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.121:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.121:8443: connect: connection refused" Nov 24 21:00:28 crc kubenswrapper[4812]: I1124 21:00:28.872571 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7d6b4c97d4-tx4dh" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.121:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.121:8443: connect: connection refused" Nov 24 21:00:28 crc kubenswrapper[4812]: I1124 21:00:28.873752 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 21:00:28 crc kubenswrapper[4812]: I1124 21:00:28.967237 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:00:28 crc kubenswrapper[4812]: E1124 21:00:28.967726 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.737859 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.810082 4812 generic.go:334] "Generic (PLEG): container finished" podID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerID="882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2" exitCode=137 Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.810153 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6b4c97d4-tx4dh" event={"ID":"5c23031c-42fe-41dc-9e20-8eff2922676c","Type":"ContainerDied","Data":"882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2"} Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.810190 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d6b4c97d4-tx4dh" event={"ID":"5c23031c-42fe-41dc-9e20-8eff2922676c","Type":"ContainerDied","Data":"d52e981809bc73ca27414cf3af6469983cdb52e0f5586d043614c0b95b890dda"} Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.810208 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d6b4c97d4-tx4dh" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.810229 4812 scope.go:117] "RemoveContainer" containerID="db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.853142 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-combined-ca-bundle\") pod \"5c23031c-42fe-41dc-9e20-8eff2922676c\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.853254 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-scripts\") pod \"5c23031c-42fe-41dc-9e20-8eff2922676c\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.853508 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c23031c-42fe-41dc-9e20-8eff2922676c-logs\") pod \"5c23031c-42fe-41dc-9e20-8eff2922676c\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.853544 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27nsj\" (UniqueName: \"kubernetes.io/projected/5c23031c-42fe-41dc-9e20-8eff2922676c-kube-api-access-27nsj\") pod \"5c23031c-42fe-41dc-9e20-8eff2922676c\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.853623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-secret-key\") pod \"5c23031c-42fe-41dc-9e20-8eff2922676c\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.853720 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-config-data\") pod \"5c23031c-42fe-41dc-9e20-8eff2922676c\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.853811 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-tls-certs\") pod \"5c23031c-42fe-41dc-9e20-8eff2922676c\" (UID: \"5c23031c-42fe-41dc-9e20-8eff2922676c\") " Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.854239 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c23031c-42fe-41dc-9e20-8eff2922676c-logs" (OuterVolumeSpecName: "logs") pod "5c23031c-42fe-41dc-9e20-8eff2922676c" (UID: "5c23031c-42fe-41dc-9e20-8eff2922676c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.854496 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c23031c-42fe-41dc-9e20-8eff2922676c-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.858714 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c23031c-42fe-41dc-9e20-8eff2922676c-kube-api-access-27nsj" (OuterVolumeSpecName: "kube-api-access-27nsj") pod "5c23031c-42fe-41dc-9e20-8eff2922676c" (UID: "5c23031c-42fe-41dc-9e20-8eff2922676c"). InnerVolumeSpecName "kube-api-access-27nsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.859407 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5c23031c-42fe-41dc-9e20-8eff2922676c" (UID: "5c23031c-42fe-41dc-9e20-8eff2922676c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.878527 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-scripts" (OuterVolumeSpecName: "scripts") pod "5c23031c-42fe-41dc-9e20-8eff2922676c" (UID: "5c23031c-42fe-41dc-9e20-8eff2922676c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.889601 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-config-data" (OuterVolumeSpecName: "config-data") pod "5c23031c-42fe-41dc-9e20-8eff2922676c" (UID: "5c23031c-42fe-41dc-9e20-8eff2922676c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.920411 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c23031c-42fe-41dc-9e20-8eff2922676c" (UID: "5c23031c-42fe-41dc-9e20-8eff2922676c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.928655 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "5c23031c-42fe-41dc-9e20-8eff2922676c" (UID: "5c23031c-42fe-41dc-9e20-8eff2922676c"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.957167 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27nsj\" (UniqueName: \"kubernetes.io/projected/5c23031c-42fe-41dc-9e20-8eff2922676c-kube-api-access-27nsj\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.957680 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.957696 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.957711 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.957743 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c23031c-42fe-41dc-9e20-8eff2922676c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:33 crc kubenswrapper[4812]: I1124 21:00:33.957757 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c23031c-42fe-41dc-9e20-8eff2922676c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.011131 4812 scope.go:117] "RemoveContainer" containerID="882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2" Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.044392 4812 scope.go:117] "RemoveContainer" containerID="db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669" Nov 24 21:00:34 crc kubenswrapper[4812]: E1124 21:00:34.045819 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669\": container with ID starting with db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669 not found: ID does not exist" containerID="db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669" Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.045889 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669"} err="failed to get container status \"db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669\": rpc error: code = NotFound desc = could not find container \"db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669\": container with ID starting with db3f056d73a44e874d6d38286ca3d275882f892feabc2014cfa9905f1d851669 not found: ID does not exist" Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.045930 4812 scope.go:117] "RemoveContainer" containerID="882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2" Nov 24 21:00:34 crc kubenswrapper[4812]: E1124 21:00:34.048622 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2\": container with ID starting with 
882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2 not found: ID does not exist" containerID="882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2" Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.048704 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2"} err="failed to get container status \"882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2\": rpc error: code = NotFound desc = could not find container \"882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2\": container with ID starting with 882c3c013d44502977dd9824296a174a7e3bfa56048f049ac798e50846fa47b2 not found: ID does not exist" Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.165525 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d6b4c97d4-tx4dh"] Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.183101 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d6b4c97d4-tx4dh"] Nov 24 21:00:34 crc kubenswrapper[4812]: I1124 21:00:34.985690 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" path="/var/lib/kubelet/pods/5c23031c-42fe-41dc-9e20-8eff2922676c/volumes" Nov 24 21:00:43 crc kubenswrapper[4812]: I1124 21:00:43.965677 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:00:43 crc kubenswrapper[4812]: E1124 21:00:43.967075 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:00:55 crc kubenswrapper[4812]: I1124 21:00:55.684751 4812 scope.go:117] "RemoveContainer" containerID="7edfe1b23988d9624d9486f64692f8085d7282753ae3f37f628397ebeed32afc" Nov 24 21:00:55 crc kubenswrapper[4812]: I1124 21:00:55.730631 4812 scope.go:117] "RemoveContainer" containerID="80f76c14a785e7460005aca193011ab9c16732b2c6318b05149fd30316046a1e" Nov 24 21:00:55 crc kubenswrapper[4812]: I1124 21:00:55.798463 4812 scope.go:117] "RemoveContainer" containerID="27751295ee487f4ab8aecf1d392858b8cde1cfd3a2359d90e6ddfb21dd506cd0" Nov 24 21:00:55 crc kubenswrapper[4812]: I1124 21:00:55.846849 4812 scope.go:117] "RemoveContainer" containerID="b9b63e3c2b07b898841ec8c6233df14a97c5c4f400d55e1bec86a0fe028a507b" Nov 24 21:00:55 crc kubenswrapper[4812]: I1124 21:00:55.897556 4812 scope.go:117] "RemoveContainer" containerID="ea444804ddbcda422bbdb2c08e90b7b647c36494c5e8205396bea8271173de08" Nov 24 21:00:58 crc kubenswrapper[4812]: I1124 21:00:58.967731 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:00:58 crc kubenswrapper[4812]: E1124 21:00:58.968728 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" 
podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.167911 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400301-8fvbw"] Nov 24 21:01:00 crc kubenswrapper[4812]: E1124 21:01:00.168368 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168384 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon" Nov 24 21:01:00 crc kubenswrapper[4812]: E1124 21:01:00.168402 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168410 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: E1124 21:01:00.168437 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168446 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: E1124 21:01:00.168459 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168466 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" Nov 24 21:01:00 crc kubenswrapper[4812]: E1124 21:01:00.168487 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" containerName="collect-profiles" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168494 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" containerName="collect-profiles" Nov 24 21:01:00 crc kubenswrapper[4812]: E1124 21:01:00.168513 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168520 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: E1124 21:01:00.168539 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168547 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168782 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168806 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon-log" Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168820 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c23031c-42fe-41dc-9e20-8eff2922676c" containerName="horizon" Nov 24 21:01:00 crc kubenswrapper[4812]: 
I1124 21:01:00.168839 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" containerName="collect-profiles"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168857 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168871 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23ec86d-efe2-4fd9-806f-20c7f6eee8b0" containerName="horizon-log"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.168889 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d498fa-8f45-44cc-9bda-63f0e5e0f7bb" containerName="horizon"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.169748 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.197615 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400301-8fvbw"]
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.296548 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-fernet-keys\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.296853 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spcvm\" (UniqueName: \"kubernetes.io/projected/d891f15c-5fab-4e30-b131-437a574c5c6e-kube-api-access-spcvm\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.296996 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-config-data\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.297246 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-combined-ca-bundle\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.399360 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-fernet-keys\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.399900 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spcvm\" (UniqueName: \"kubernetes.io/projected/d891f15c-5fab-4e30-b131-437a574c5c6e-kube-api-access-spcvm\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.400010 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-config-data\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.400091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-combined-ca-bundle\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.409841 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-fernet-keys\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.409935 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-combined-ca-bundle\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.409966 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-config-data\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.423434 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spcvm\" (UniqueName: \"kubernetes.io/projected/d891f15c-5fab-4e30-b131-437a574c5c6e-kube-api-access-spcvm\") pod \"keystone-cron-29400301-8fvbw\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") " pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:00 crc kubenswrapper[4812]: I1124 21:01:00.515744 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:01 crc kubenswrapper[4812]: I1124 21:01:01.022525 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400301-8fvbw"]
Nov 24 21:01:01 crc kubenswrapper[4812]: I1124 21:01:01.178881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400301-8fvbw" event={"ID":"d891f15c-5fab-4e30-b131-437a574c5c6e","Type":"ContainerStarted","Data":"48976a910c5d3770fcf1f042af71765ac875a6985e35ecc7e52b08a5074df4d1"}
Nov 24 21:01:02 crc kubenswrapper[4812]: I1124 21:01:02.193002 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400301-8fvbw" event={"ID":"d891f15c-5fab-4e30-b131-437a574c5c6e","Type":"ContainerStarted","Data":"b6a5bf10f64dbc67d8d8329b7f728db5a206a9cdb79cd38fa5fdf542e2082f84"}
Nov 24 21:01:02 crc kubenswrapper[4812]: I1124 21:01:02.235394 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400301-8fvbw" podStartSLOduration=2.235368586 podStartE2EDuration="2.235368586s" podCreationTimestamp="2025-11-24 21:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:01:02.218591602 +0000 UTC m=+6256.007544013" watchObservedRunningTime="2025-11-24 21:01:02.235368586 +0000 UTC m=+6256.024320997"
Nov 24 21:01:03 crc kubenswrapper[4812]: I1124 21:01:03.210430 4812 generic.go:334] "Generic (PLEG): container finished" podID="d891f15c-5fab-4e30-b131-437a574c5c6e" containerID="b6a5bf10f64dbc67d8d8329b7f728db5a206a9cdb79cd38fa5fdf542e2082f84" exitCode=0
Nov 24 21:01:03 crc kubenswrapper[4812]: I1124 21:01:03.210881 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400301-8fvbw" event={"ID":"d891f15c-5fab-4e30-b131-437a574c5c6e","Type":"ContainerDied","Data":"b6a5bf10f64dbc67d8d8329b7f728db5a206a9cdb79cd38fa5fdf542e2082f84"}
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.642205 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.718502 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-combined-ca-bundle\") pod \"d891f15c-5fab-4e30-b131-437a574c5c6e\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") "
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.753812 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d891f15c-5fab-4e30-b131-437a574c5c6e" (UID: "d891f15c-5fab-4e30-b131-437a574c5c6e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.820082 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spcvm\" (UniqueName: \"kubernetes.io/projected/d891f15c-5fab-4e30-b131-437a574c5c6e-kube-api-access-spcvm\") pod \"d891f15c-5fab-4e30-b131-437a574c5c6e\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") "
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.820423 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-config-data\") pod \"d891f15c-5fab-4e30-b131-437a574c5c6e\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") "
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.820571 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-fernet-keys\") pod \"d891f15c-5fab-4e30-b131-437a574c5c6e\" (UID: \"d891f15c-5fab-4e30-b131-437a574c5c6e\") "
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.821248 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.825768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d891f15c-5fab-4e30-b131-437a574c5c6e-kube-api-access-spcvm" (OuterVolumeSpecName: "kube-api-access-spcvm") pod "d891f15c-5fab-4e30-b131-437a574c5c6e" (UID: "d891f15c-5fab-4e30-b131-437a574c5c6e"). InnerVolumeSpecName "kube-api-access-spcvm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.826676 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d891f15c-5fab-4e30-b131-437a574c5c6e" (UID: "d891f15c-5fab-4e30-b131-437a574c5c6e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.916971 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-config-data" (OuterVolumeSpecName: "config-data") pod "d891f15c-5fab-4e30-b131-437a574c5c6e" (UID: "d891f15c-5fab-4e30-b131-437a574c5c6e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.924028 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.924067 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d891f15c-5fab-4e30-b131-437a574c5c6e-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:04 crc kubenswrapper[4812]: I1124 21:01:04.924087 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spcvm\" (UniqueName: \"kubernetes.io/projected/d891f15c-5fab-4e30-b131-437a574c5c6e-kube-api-access-spcvm\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:05 crc kubenswrapper[4812]: I1124 21:01:05.242050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400301-8fvbw" event={"ID":"d891f15c-5fab-4e30-b131-437a574c5c6e","Type":"ContainerDied","Data":"48976a910c5d3770fcf1f042af71765ac875a6985e35ecc7e52b08a5074df4d1"}
Nov 24 21:01:05 crc kubenswrapper[4812]: I1124 21:01:05.242685 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48976a910c5d3770fcf1f042af71765ac875a6985e35ecc7e52b08a5074df4d1"
Nov 24 21:01:05 crc kubenswrapper[4812]: I1124 21:01:05.242716 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400301-8fvbw"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.072557 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-746cdf7854-2hgwq"]
Nov 24 21:01:09 crc kubenswrapper[4812]: E1124 21:01:09.073251 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d891f15c-5fab-4e30-b131-437a574c5c6e" containerName="keystone-cron"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.073268 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d891f15c-5fab-4e30-b131-437a574c5c6e" containerName="keystone-cron"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.073525 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d891f15c-5fab-4e30-b131-437a574c5c6e" containerName="keystone-cron"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.074854 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.088374 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-746cdf7854-2hgwq"]
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.224841 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-horizon-secret-key\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.224915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-horizon-tls-certs\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.225055 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-combined-ca-bundle\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.225097 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkgpw\" (UniqueName: \"kubernetes.io/projected/5840494b-6708-4ad4-b279-183e4d30dc5d-kube-api-access-xkgpw\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.225220 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5840494b-6708-4ad4-b279-183e4d30dc5d-scripts\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.225280 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5840494b-6708-4ad4-b279-183e4d30dc5d-logs\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.225373 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5840494b-6708-4ad4-b279-183e4d30dc5d-config-data\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.327091 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-combined-ca-bundle\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.327144 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkgpw\" (UniqueName: \"kubernetes.io/projected/5840494b-6708-4ad4-b279-183e4d30dc5d-kube-api-access-xkgpw\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.327193 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5840494b-6708-4ad4-b279-183e4d30dc5d-scripts\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.327227 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5840494b-6708-4ad4-b279-183e4d30dc5d-logs\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.327256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5840494b-6708-4ad4-b279-183e4d30dc5d-config-data\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.327430 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-horizon-secret-key\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.327455 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-horizon-tls-certs\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.328728 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5840494b-6708-4ad4-b279-183e4d30dc5d-logs\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.329625 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5840494b-6708-4ad4-b279-183e4d30dc5d-scripts\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.331030 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5840494b-6708-4ad4-b279-183e4d30dc5d-config-data\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.333518 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-horizon-secret-key\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.333631 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-horizon-tls-certs\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.346003 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkgpw\" (UniqueName: \"kubernetes.io/projected/5840494b-6708-4ad4-b279-183e4d30dc5d-kube-api-access-xkgpw\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.352569 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5840494b-6708-4ad4-b279-183e4d30dc5d-combined-ca-bundle\") pod \"horizon-746cdf7854-2hgwq\" (UID: \"5840494b-6708-4ad4-b279-183e4d30dc5d\") " pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.401211 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:09 crc kubenswrapper[4812]: I1124 21:01:09.896278 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-746cdf7854-2hgwq"]
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.304979 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746cdf7854-2hgwq" event={"ID":"5840494b-6708-4ad4-b279-183e4d30dc5d","Type":"ContainerStarted","Data":"6cae9caa6d6da7bbb0c9c1a750fcd1f8e7f2c6c92338b1036925de03e227eb3f"}
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.305272 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746cdf7854-2hgwq" event={"ID":"5840494b-6708-4ad4-b279-183e4d30dc5d","Type":"ContainerStarted","Data":"bdd7f87be62cf05866ee3051cce1f37c2766adc09569a86e17d8ad59d53960ac"}
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.438026 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-9qzj8"]
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.443724 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.457990 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9qzj8"]
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.531933 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3ae3-account-create-hzs8w"]
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.533265 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.535908 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.543273 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3ae3-account-create-hzs8w"]
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.550783 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dce7cffc-90f1-4d13-ace8-057618426139-operator-scripts\") pod \"heat-db-create-9qzj8\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") " pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.550854 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn8dd\" (UniqueName: \"kubernetes.io/projected/dce7cffc-90f1-4d13-ace8-057618426139-kube-api-access-nn8dd\") pod \"heat-db-create-9qzj8\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") " pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.652941 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmlf\" (UniqueName: \"kubernetes.io/projected/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-kube-api-access-cvmlf\") pod \"heat-3ae3-account-create-hzs8w\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") " pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.653017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dce7cffc-90f1-4d13-ace8-057618426139-operator-scripts\") pod \"heat-db-create-9qzj8\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") " pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.653097 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8dd\" (UniqueName: \"kubernetes.io/projected/dce7cffc-90f1-4d13-ace8-057618426139-kube-api-access-nn8dd\") pod \"heat-db-create-9qzj8\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") " pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.653140 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-operator-scripts\") pod \"heat-3ae3-account-create-hzs8w\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") " pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.653729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dce7cffc-90f1-4d13-ace8-057618426139-operator-scripts\") pod \"heat-db-create-9qzj8\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") " pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.669800 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn8dd\" (UniqueName: \"kubernetes.io/projected/dce7cffc-90f1-4d13-ace8-057618426139-kube-api-access-nn8dd\") pod \"heat-db-create-9qzj8\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") " pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.755256 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-operator-scripts\") pod \"heat-3ae3-account-create-hzs8w\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") " pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.755852 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmlf\" (UniqueName: \"kubernetes.io/projected/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-kube-api-access-cvmlf\") pod \"heat-3ae3-account-create-hzs8w\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") " pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.756259 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-operator-scripts\") pod \"heat-3ae3-account-create-hzs8w\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") " pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.769399 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.782985 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmlf\" (UniqueName: \"kubernetes.io/projected/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-kube-api-access-cvmlf\") pod \"heat-3ae3-account-create-hzs8w\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") " pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:10 crc kubenswrapper[4812]: I1124 21:01:10.852123 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.068411 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9816-account-create-5vd9g"]
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.076937 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xxqg8"]
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.086095 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xxqg8"]
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.091230 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9816-account-create-5vd9g"]
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.305540 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9qzj8"]
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.314758 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-746cdf7854-2hgwq" event={"ID":"5840494b-6708-4ad4-b279-183e4d30dc5d","Type":"ContainerStarted","Data":"bd987aa3b0a7dbc1fe4716ceed48a98c8dbdd89113a8860dd33023e0cb879e9e"}
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.354715 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-746cdf7854-2hgwq" podStartSLOduration=2.354688987 podStartE2EDuration="2.354688987s" podCreationTimestamp="2025-11-24 21:01:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:01:11.351791585 +0000 UTC m=+6265.140743976" watchObservedRunningTime="2025-11-24 21:01:11.354688987 +0000 UTC m=+6265.143641398"
Nov 24 21:01:11 crc kubenswrapper[4812]: I1124 21:01:11.437573 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3ae3-account-create-hzs8w"]
Nov 24 21:01:11 crc kubenswrapper[4812]: W1124 21:01:11.439744 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e8ab6b8_4ac3_479d_b176_30105aba1c8c.slice/crio-3d6f171986dec0b50c7cb0d0b43cf2608e6f5fa35bd4ba08abf6b44c6eac43d4 WatchSource:0}: Error finding container 3d6f171986dec0b50c7cb0d0b43cf2608e6f5fa35bd4ba08abf6b44c6eac43d4: Status 404 returned error can't find the container with id 3d6f171986dec0b50c7cb0d0b43cf2608e6f5fa35bd4ba08abf6b44c6eac43d4
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.324224 4812 generic.go:334] "Generic (PLEG): container finished" podID="dce7cffc-90f1-4d13-ace8-057618426139" containerID="df80eb27e968167ecb1cdf8606ae99171329318e7ea207545a295bd01ea8cc0a" exitCode=0
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.324324 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9qzj8" event={"ID":"dce7cffc-90f1-4d13-ace8-057618426139","Type":"ContainerDied","Data":"df80eb27e968167ecb1cdf8606ae99171329318e7ea207545a295bd01ea8cc0a"}
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.324604 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9qzj8" event={"ID":"dce7cffc-90f1-4d13-ace8-057618426139","Type":"ContainerStarted","Data":"73400a5bfe6f7dae71e8743d2cf3e55eacf8a26c8014bcdc00df0a86a2b8f913"}
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.326667 4812 generic.go:334] "Generic (PLEG): container finished" podID="2e8ab6b8-4ac3-479d-b176-30105aba1c8c" containerID="b985c65af0c23ef4cc6998a0180bfdf3f35b9232a7859be420f5e3e8eee96b77" exitCode=0
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.326762 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3ae3-account-create-hzs8w" event={"ID":"2e8ab6b8-4ac3-479d-b176-30105aba1c8c","Type":"ContainerDied","Data":"b985c65af0c23ef4cc6998a0180bfdf3f35b9232a7859be420f5e3e8eee96b77"}
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.326792 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3ae3-account-create-hzs8w" event={"ID":"2e8ab6b8-4ac3-479d-b176-30105aba1c8c","Type":"ContainerStarted","Data":"3d6f171986dec0b50c7cb0d0b43cf2608e6f5fa35bd4ba08abf6b44c6eac43d4"}
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.978176 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a481c6e-8ec6-4c42-892b-346fdca3aace" path="/var/lib/kubelet/pods/1a481c6e-8ec6-4c42-892b-346fdca3aace/volumes"
Nov 24 21:01:12 crc kubenswrapper[4812]: I1124 21:01:12.978816 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c982f0e-64c7-4e8e-999c-39c42e012f5c" path="/var/lib/kubelet/pods/1c982f0e-64c7-4e8e-999c-39c42e012f5c/volumes"
Nov 24 21:01:13 crc kubenswrapper[4812]: I1124 21:01:13.878376 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:13 crc kubenswrapper[4812]: I1124 21:01:13.885821 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:13 crc kubenswrapper[4812]: I1124 21:01:13.966183 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196"
Nov 24 21:01:13 crc kubenswrapper[4812]: E1124 21:01:13.966425 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.026424 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvmlf\" (UniqueName: \"kubernetes.io/projected/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-kube-api-access-cvmlf\") pod \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") "
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.026522 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn8dd\" (UniqueName: \"kubernetes.io/projected/dce7cffc-90f1-4d13-ace8-057618426139-kube-api-access-nn8dd\") pod \"dce7cffc-90f1-4d13-ace8-057618426139\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") "
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.027486 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dce7cffc-90f1-4d13-ace8-057618426139-operator-scripts\") pod \"dce7cffc-90f1-4d13-ace8-057618426139\" (UID: \"dce7cffc-90f1-4d13-ace8-057618426139\") "
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.027596 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-operator-scripts\") pod \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\" (UID: \"2e8ab6b8-4ac3-479d-b176-30105aba1c8c\") "
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.028471 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e8ab6b8-4ac3-479d-b176-30105aba1c8c" (UID: "2e8ab6b8-4ac3-479d-b176-30105aba1c8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.028642 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dce7cffc-90f1-4d13-ace8-057618426139-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dce7cffc-90f1-4d13-ace8-057618426139" (UID: "dce7cffc-90f1-4d13-ace8-057618426139"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.032687 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-kube-api-access-cvmlf" (OuterVolumeSpecName: "kube-api-access-cvmlf") pod "2e8ab6b8-4ac3-479d-b176-30105aba1c8c" (UID: "2e8ab6b8-4ac3-479d-b176-30105aba1c8c"). InnerVolumeSpecName "kube-api-access-cvmlf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.045825 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce7cffc-90f1-4d13-ace8-057618426139-kube-api-access-nn8dd" (OuterVolumeSpecName: "kube-api-access-nn8dd") pod "dce7cffc-90f1-4d13-ace8-057618426139" (UID: "dce7cffc-90f1-4d13-ace8-057618426139"). InnerVolumeSpecName "kube-api-access-nn8dd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.129583 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvmlf\" (UniqueName: \"kubernetes.io/projected/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-kube-api-access-cvmlf\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.129619 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn8dd\" (UniqueName: \"kubernetes.io/projected/dce7cffc-90f1-4d13-ace8-057618426139-kube-api-access-nn8dd\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.129628 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dce7cffc-90f1-4d13-ace8-057618426139-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.129637 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e8ab6b8-4ac3-479d-b176-30105aba1c8c-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.347538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3ae3-account-create-hzs8w" event={"ID":"2e8ab6b8-4ac3-479d-b176-30105aba1c8c","Type":"ContainerDied","Data":"3d6f171986dec0b50c7cb0d0b43cf2608e6f5fa35bd4ba08abf6b44c6eac43d4"}
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.347593 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6f171986dec0b50c7cb0d0b43cf2608e6f5fa35bd4ba08abf6b44c6eac43d4"
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.347625 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3ae3-account-create-hzs8w"
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.349238 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9qzj8" event={"ID":"dce7cffc-90f1-4d13-ace8-057618426139","Type":"ContainerDied","Data":"73400a5bfe6f7dae71e8743d2cf3e55eacf8a26c8014bcdc00df0a86a2b8f913"}
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.349282 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73400a5bfe6f7dae71e8743d2cf3e55eacf8a26c8014bcdc00df0a86a2b8f913"
Nov 24 21:01:14 crc kubenswrapper[4812]: I1124 21:01:14.349294 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9qzj8"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.617276 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-jdv9d"]
Nov 24 21:01:15 crc kubenswrapper[4812]: E1124 21:01:15.618040 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce7cffc-90f1-4d13-ace8-057618426139" containerName="mariadb-database-create"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.618057 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce7cffc-90f1-4d13-ace8-057618426139" containerName="mariadb-database-create"
Nov 24 21:01:15 crc kubenswrapper[4812]: E1124 21:01:15.618097 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e8ab6b8-4ac3-479d-b176-30105aba1c8c" containerName="mariadb-account-create"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.618105 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8ab6b8-4ac3-479d-b176-30105aba1c8c" containerName="mariadb-account-create"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.618300 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e8ab6b8-4ac3-479d-b176-30105aba1c8c" containerName="mariadb-account-create"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.618323 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce7cffc-90f1-4d13-ace8-057618426139" containerName="mariadb-database-create"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.618974 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.621048 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-85tv7"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.621994 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.635218 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jdv9d"]
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.763722 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-config-data\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.764133 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2jp\" (UniqueName: \"kubernetes.io/projected/1176b595-2448-400c-9e8c-ac98aef730fb-kube-api-access-bc2jp\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.764299 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-combined-ca-bundle\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.866542 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-combined-ca-bundle\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.866745 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-config-data\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.866890 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2jp\" (UniqueName: \"kubernetes.io/projected/1176b595-2448-400c-9e8c-ac98aef730fb-kube-api-access-bc2jp\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.873996 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-config-data\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.884049 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-combined-ca-bundle\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.889025 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc2jp\" (UniqueName: \"kubernetes.io/projected/1176b595-2448-400c-9e8c-ac98aef730fb-kube-api-access-bc2jp\") pod \"heat-db-sync-jdv9d\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") " pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:15 crc kubenswrapper[4812]: I1124 21:01:15.981384 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:16 crc kubenswrapper[4812]: I1124 21:01:16.478716 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-jdv9d"]
Nov 24 21:01:16 crc kubenswrapper[4812]: W1124 21:01:16.493256 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1176b595_2448_400c_9e8c_ac98aef730fb.slice/crio-6cb0b615b5c9729870f68e85a95b97ba648b2cb112bf29ccda0ac291e1b9feee WatchSource:0}: Error finding container 6cb0b615b5c9729870f68e85a95b97ba648b2cb112bf29ccda0ac291e1b9feee: Status 404 returned error can't find the container with id 6cb0b615b5c9729870f68e85a95b97ba648b2cb112bf29ccda0ac291e1b9feee
Nov 24 21:01:17 crc kubenswrapper[4812]: I1124 21:01:17.089108 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-xhdqg"]
Nov 24 21:01:17 crc kubenswrapper[4812]: I1124 21:01:17.097595 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-xhdqg"]
Nov 24 21:01:17 crc kubenswrapper[4812]: I1124 21:01:17.383037 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdv9d" event={"ID":"1176b595-2448-400c-9e8c-ac98aef730fb","Type":"ContainerStarted","Data":"6cb0b615b5c9729870f68e85a95b97ba648b2cb112bf29ccda0ac291e1b9feee"}
Nov 24 21:01:18 crc kubenswrapper[4812]: I1124 21:01:18.978140 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00a581c-f73c-4e31-9fb4-a469e26fe3a3" path="/var/lib/kubelet/pods/d00a581c-f73c-4e31-9fb4-a469e26fe3a3/volumes"
Nov 24 21:01:19 crc kubenswrapper[4812]: I1124 21:01:19.402120 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:19 crc kubenswrapper[4812]: I1124 21:01:19.402466 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-746cdf7854-2hgwq"
Nov 24 21:01:25 crc kubenswrapper[4812]: I1124 21:01:25.485490 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdv9d" event={"ID":"1176b595-2448-400c-9e8c-ac98aef730fb","Type":"ContainerStarted","Data":"4eb33002975bea1e1d4aa9cafbc75a19af5549c53720350eaa8f43a286d95d0e"}
Nov 24 21:01:25 crc kubenswrapper[4812]: I1124 21:01:25.530797 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-jdv9d" podStartSLOduration=2.614293675 podStartE2EDuration="10.530778537s" podCreationTimestamp="2025-11-24 21:01:15 +0000 UTC" firstStartedPulling="2025-11-24 21:01:16.493895959 +0000 UTC m=+6270.282848320" lastFinishedPulling="2025-11-24 21:01:24.410380801 +0000 UTC m=+6278.199333182" observedRunningTime="2025-11-24 21:01:25.524791738 +0000 UTC m=+6279.313744109" watchObservedRunningTime="2025-11-24 21:01:25.530778537 +0000 UTC m=+6279.319730908"
Nov 24 21:01:27 crc kubenswrapper[4812]: I1124 21:01:27.511462 4812 generic.go:334] "Generic (PLEG): container finished" podID="1176b595-2448-400c-9e8c-ac98aef730fb" containerID="4eb33002975bea1e1d4aa9cafbc75a19af5549c53720350eaa8f43a286d95d0e" exitCode=0
Nov 24 21:01:27 crc kubenswrapper[4812]: I1124 21:01:27.511555 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdv9d" event={"ID":"1176b595-2448-400c-9e8c-ac98aef730fb","Type":"ContainerDied","Data":"4eb33002975bea1e1d4aa9cafbc75a19af5549c53720350eaa8f43a286d95d0e"}
Nov 24 21:01:27 crc kubenswrapper[4812]: I1124 21:01:27.966684 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196"
Nov 24 21:01:27 crc kubenswrapper[4812]: E1124 21:01:27.967542 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:01:28 crc kubenswrapper[4812]: I1124 21:01:28.834361 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:28 crc kubenswrapper[4812]: I1124 21:01:28.972908 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2jp\" (UniqueName: \"kubernetes.io/projected/1176b595-2448-400c-9e8c-ac98aef730fb-kube-api-access-bc2jp\") pod \"1176b595-2448-400c-9e8c-ac98aef730fb\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") "
Nov 24 21:01:28 crc kubenswrapper[4812]: I1124 21:01:28.973161 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-combined-ca-bundle\") pod \"1176b595-2448-400c-9e8c-ac98aef730fb\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") "
Nov 24 21:01:28 crc kubenswrapper[4812]: I1124 21:01:28.973202 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-config-data\") pod \"1176b595-2448-400c-9e8c-ac98aef730fb\" (UID: \"1176b595-2448-400c-9e8c-ac98aef730fb\") "
Nov 24 21:01:28 crc kubenswrapper[4812]: I1124 21:01:28.979594 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1176b595-2448-400c-9e8c-ac98aef730fb-kube-api-access-bc2jp" (OuterVolumeSpecName: "kube-api-access-bc2jp") pod "1176b595-2448-400c-9e8c-ac98aef730fb" (UID: "1176b595-2448-400c-9e8c-ac98aef730fb"). InnerVolumeSpecName "kube-api-access-bc2jp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.076979 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2jp\" (UniqueName: \"kubernetes.io/projected/1176b595-2448-400c-9e8c-ac98aef730fb-kube-api-access-bc2jp\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.160328 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1176b595-2448-400c-9e8c-ac98aef730fb" (UID: "1176b595-2448-400c-9e8c-ac98aef730fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.180182 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.235176 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-config-data" (OuterVolumeSpecName: "config-data") pod "1176b595-2448-400c-9e8c-ac98aef730fb" (UID: "1176b595-2448-400c-9e8c-ac98aef730fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.282121 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1176b595-2448-400c-9e8c-ac98aef730fb-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.553010 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-jdv9d" event={"ID":"1176b595-2448-400c-9e8c-ac98aef730fb","Type":"ContainerDied","Data":"6cb0b615b5c9729870f68e85a95b97ba648b2cb112bf29ccda0ac291e1b9feee"}
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.553052 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb0b615b5c9729870f68e85a95b97ba648b2cb112bf29ccda0ac291e1b9feee"
Nov 24 21:01:29 crc kubenswrapper[4812]: I1124 21:01:29.553095 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-jdv9d"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.832156 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7fb88df497-nxz6w"]
Nov 24 21:01:30 crc kubenswrapper[4812]: E1124 21:01:30.832873 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1176b595-2448-400c-9e8c-ac98aef730fb" containerName="heat-db-sync"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.832890 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1176b595-2448-400c-9e8c-ac98aef730fb" containerName="heat-db-sync"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.835533 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1176b595-2448-400c-9e8c-ac98aef730fb" containerName="heat-db-sync"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.836479 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.843831 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-85tv7"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.844526 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.844777 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.854409 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7fb88df497-nxz6w"]
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.939187 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7d99fbd676-58btb"]
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.940523 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.946947 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.952369 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-77885f85fc-q7crr"]
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.953694 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.955639 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data"
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.962346 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d99fbd676-58btb"]
Nov 24 21:01:30 crc kubenswrapper[4812]: I1124 21:01:30.980104 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77885f85fc-q7crr"]
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.023582 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-combined-ca-bundle\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.023716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.023755 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnc6f\" (UniqueName: \"kubernetes.io/projected/1ce7c567-be13-4d0c-bf12-fb472b0e8661-kube-api-access-cnc6f\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.023774 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data-custom\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.125870 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-combined-ca-bundle\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.126258 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-combined-ca-bundle\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.126387 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.126496 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data-custom\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.126670 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.126814 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.126920 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-combined-ca-bundle\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.127015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data-custom\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.127100 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfdgf\" (UniqueName: \"kubernetes.io/projected/35738265-6edb-4fdd-9dcf-c76a7097a0c2-kube-api-access-wfdgf\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.127199 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqkr\" (UniqueName: \"kubernetes.io/projected/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-kube-api-access-jxqkr\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.127290 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnc6f\" (UniqueName: \"kubernetes.io/projected/1ce7c567-be13-4d0c-bf12-fb472b0e8661-kube-api-access-cnc6f\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.127406 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data-custom\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.133369 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-combined-ca-bundle\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.133498 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.134428 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data-custom\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.147410 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnc6f\" (UniqueName: \"kubernetes.io/projected/1ce7c567-be13-4d0c-bf12-fb472b0e8661-kube-api-access-cnc6f\") pod \"heat-engine-7fb88df497-nxz6w\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.181957 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7fb88df497-nxz6w"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229648 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229703 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data-custom\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229775 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229829 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-combined-ca-bundle\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229851 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data-custom\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfdgf\" (UniqueName: \"kubernetes.io/projected/35738265-6edb-4fdd-9dcf-c76a7097a0c2-kube-api-access-wfdgf\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229893 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqkr\" (UniqueName: \"kubernetes.io/projected/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-kube-api-access-jxqkr\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.229925 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-combined-ca-bundle\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.240116 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-combined-ca-bundle\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.243083 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-combined-ca-bundle\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.246764 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.248131 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data-custom\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.248913 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.253810 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfdgf\" (UniqueName: \"kubernetes.io/projected/35738265-6edb-4fdd-9dcf-c76a7097a0c2-kube-api-access-wfdgf\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.259321 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data-custom\") pod \"heat-api-7d99fbd676-58btb\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.260450 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqkr\" (UniqueName: \"kubernetes.io/projected/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-kube-api-access-jxqkr\") pod \"heat-cfnapi-77885f85fc-q7crr\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " pod="openstack/heat-cfnapi-77885f85fc-q7crr"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.273957 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d99fbd676-58btb"
Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.283035 4812 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-cfnapi-77885f85fc-q7crr" Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.629166 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-746cdf7854-2hgwq" Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.670110 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7fb88df497-nxz6w"] Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.882840 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7d99fbd676-58btb"] Nov 24 21:01:31 crc kubenswrapper[4812]: W1124 21:01:31.885465 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35738265_6edb_4fdd_9dcf_c76a7097a0c2.slice/crio-658ea9ca1114856e02b8f1ae7c00d1e95c6428a0f7a9d37c07e4d4b5bde7f071 WatchSource:0}: Error finding container 658ea9ca1114856e02b8f1ae7c00d1e95c6428a0f7a9d37c07e4d4b5bde7f071: Status 404 returned error can't find the container with id 658ea9ca1114856e02b8f1ae7c00d1e95c6428a0f7a9d37c07e4d4b5bde7f071 Nov 24 21:01:31 crc kubenswrapper[4812]: I1124 21:01:31.977653 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77885f85fc-q7crr"] Nov 24 21:01:31 crc kubenswrapper[4812]: W1124 21:01:31.979727 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc11d0d38_cbc2_42e5_bbc8_8bdd61756602.slice/crio-2e64032f66df6894367a555961a55fd54ccc48f1a50d9e1e5f34434ff79d2803 WatchSource:0}: Error finding container 2e64032f66df6894367a555961a55fd54ccc48f1a50d9e1e5f34434ff79d2803: Status 404 returned error can't find the container with id 2e64032f66df6894367a555961a55fd54ccc48f1a50d9e1e5f34434ff79d2803 Nov 24 21:01:32 crc kubenswrapper[4812]: I1124 21:01:32.605601 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77885f85fc-q7crr" event={"ID":"c11d0d38-cbc2-42e5-bbc8-8bdd61756602","Type":"ContainerStarted","Data":"2e64032f66df6894367a555961a55fd54ccc48f1a50d9e1e5f34434ff79d2803"} Nov 24 21:01:32 crc kubenswrapper[4812]: I1124 21:01:32.607865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fb88df497-nxz6w" event={"ID":"1ce7c567-be13-4d0c-bf12-fb472b0e8661","Type":"ContainerStarted","Data":"d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71"} Nov 24 21:01:32 crc kubenswrapper[4812]: I1124 21:01:32.607891 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fb88df497-nxz6w" event={"ID":"1ce7c567-be13-4d0c-bf12-fb472b0e8661","Type":"ContainerStarted","Data":"bf893361b5bd95a4aee59fef5c116806df19e8920a8d4c711a594f0172ec7e65"} Nov 24 21:01:32 crc kubenswrapper[4812]: I1124 21:01:32.609098 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7fb88df497-nxz6w" Nov 24 21:01:32 crc kubenswrapper[4812]: I1124 21:01:32.610473 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d99fbd676-58btb" event={"ID":"35738265-6edb-4fdd-9dcf-c76a7097a0c2","Type":"ContainerStarted","Data":"658ea9ca1114856e02b8f1ae7c00d1e95c6428a0f7a9d37c07e4d4b5bde7f071"} Nov 24 21:01:32 crc kubenswrapper[4812]: I1124 21:01:32.630458 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7fb88df497-nxz6w" podStartSLOduration=2.630436691 podStartE2EDuration="2.630436691s" podCreationTimestamp="2025-11-24 21:01:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:01:32.623907626 +0000 UTC m=+6286.412859997" watchObservedRunningTime="2025-11-24 21:01:32.630436691 +0000 UTC m=+6286.419389062" Nov 24 21:01:33 crc kubenswrapper[4812]: I1124 21:01:33.377574 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-746cdf7854-2hgwq" Nov 24 21:01:33 crc kubenswrapper[4812]: I1124 21:01:33.509722 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-655bf47886-z4qqq"] Nov 24 21:01:33 crc kubenswrapper[4812]: I1124 21:01:33.509965 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-655bf47886-z4qqq" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon-log" containerID="cri-o://649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55" gracePeriod=30 Nov 24 21:01:33 crc kubenswrapper[4812]: I1124 21:01:33.510465 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-655bf47886-z4qqq" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" containerID="cri-o://36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36" gracePeriod=30 Nov 24 21:01:35 crc kubenswrapper[4812]: I1124 21:01:35.664790 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d99fbd676-58btb" event={"ID":"35738265-6edb-4fdd-9dcf-c76a7097a0c2","Type":"ContainerStarted","Data":"759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad"} Nov 24 21:01:35 crc kubenswrapper[4812]: I1124 21:01:35.665581 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7d99fbd676-58btb" Nov 24 21:01:35 crc kubenswrapper[4812]: I1124 21:01:35.668185 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77885f85fc-q7crr" event={"ID":"c11d0d38-cbc2-42e5-bbc8-8bdd61756602","Type":"ContainerStarted","Data":"09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08"} Nov 24 21:01:35 crc kubenswrapper[4812]: I1124 21:01:35.669238 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-77885f85fc-q7crr" Nov 24 21:01:35 crc kubenswrapper[4812]: I1124 21:01:35.706760 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7d99fbd676-58btb" podStartSLOduration=3.21604724 podStartE2EDuration="5.706741853s" podCreationTimestamp="2025-11-24 21:01:30 +0000 UTC" firstStartedPulling="2025-11-24 21:01:31.888164122 +0000 UTC m=+6285.677116493" lastFinishedPulling="2025-11-24 21:01:34.378858735 +0000 UTC m=+6288.167811106" observedRunningTime="2025-11-24 21:01:35.688461886 +0000 UTC m=+6289.477414327" watchObservedRunningTime="2025-11-24 21:01:35.706741853 +0000 UTC m=+6289.495694224" Nov 24 21:01:35 crc kubenswrapper[4812]: I1124 21:01:35.724509 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-77885f85fc-q7crr" podStartSLOduration=3.325533918 podStartE2EDuration="5.724489095s" podCreationTimestamp="2025-11-24 21:01:30 +0000 UTC" firstStartedPulling="2025-11-24 21:01:31.981709479 +0000 UTC m=+6285.770661850" lastFinishedPulling="2025-11-24 21:01:34.380664656 +0000 UTC m=+6288.169617027" observedRunningTime="2025-11-24 21:01:35.716432167 +0000 UTC m=+6289.505384578" watchObservedRunningTime="2025-11-24 21:01:35.724489095 +0000 UTC m=+6289.513441466" Nov 24 21:01:37 
crc kubenswrapper[4812]: I1124 21:01:37.686856 4812 generic.go:334] "Generic (PLEG): container finished" podID="28908d79-deac-497f-94c0-dd47c148c6ba" containerID="36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36" exitCode=0 Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.686946 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655bf47886-z4qqq" event={"ID":"28908d79-deac-497f-94c0-dd47c148c6ba","Type":"ContainerDied","Data":"36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36"} Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.810418 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6d5d7f948-4gnkf"] Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.812118 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.845554 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-69c756d86-pwhmr"] Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.846946 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.859692 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d5d7f948-4gnkf"] Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.878810 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69c756d86-pwhmr"] Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.931930 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7ffd46775-pj869"] Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.933614 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.951161 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7ffd46775-pj869"] Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994241 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-combined-ca-bundle\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdqcj\" (UniqueName: \"kubernetes.io/projected/979a479b-f9c2-49dc-8069-15e64a67447e-kube-api-access-rdqcj\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994308 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-combined-ca-bundle\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994327 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994471 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcf6\" (UniqueName: \"kubernetes.io/projected/be078702-3711-4540-834f-9627bfc1da1c-kube-api-access-xxcf6\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994495 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-config-data-custom\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994553 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-config-data\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:37 crc kubenswrapper[4812]: I1124 21:01:37.994593 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data-custom\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096159 4812 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7k9\" (UniqueName: \"kubernetes.io/projected/9f685759-c557-4f80-be1e-7719f27d09a9-kube-api-access-rf7k9\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096263 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcf6\" (UniqueName: \"kubernetes.io/projected/be078702-3711-4540-834f-9627bfc1da1c-kube-api-access-xxcf6\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096295 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-config-data-custom\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096319 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-combined-ca-bundle\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096379 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096436 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-config-data\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096482 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data-custom\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096525 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data-custom\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096562 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-combined-ca-bundle\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc 
kubenswrapper[4812]: I1124 21:01:38.096589 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdqcj\" (UniqueName: \"kubernetes.io/projected/979a479b-f9c2-49dc-8069-15e64a67447e-kube-api-access-rdqcj\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096611 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-combined-ca-bundle\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.096636 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.108181 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data-custom\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.108946 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-combined-ca-bundle\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.108991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-config-data-custom\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.109431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-combined-ca-bundle\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.109744 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be078702-3711-4540-834f-9627bfc1da1c-config-data\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.114921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.114974 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcf6\" 
(UniqueName: \"kubernetes.io/projected/be078702-3711-4540-834f-9627bfc1da1c-kube-api-access-xxcf6\") pod \"heat-engine-6d5d7f948-4gnkf\" (UID: \"be078702-3711-4540-834f-9627bfc1da1c\") " pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.116684 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdqcj\" (UniqueName: \"kubernetes.io/projected/979a479b-f9c2-49dc-8069-15e64a67447e-kube-api-access-rdqcj\") pod \"heat-api-69c756d86-pwhmr\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.141276 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.176397 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.199064 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.199172 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data-custom\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.199234 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf7k9\" (UniqueName: \"kubernetes.io/projected/9f685759-c557-4f80-be1e-7719f27d09a9-kube-api-access-rf7k9\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.199292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-combined-ca-bundle\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.203386 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.204492 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data-custom\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.215471 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf7k9\" (UniqueName: 
\"kubernetes.io/projected/9f685759-c557-4f80-be1e-7719f27d09a9-kube-api-access-rf7k9\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.215810 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-combined-ca-bundle\") pod \"heat-cfnapi-7ffd46775-pj869\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.254799 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.578115 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69c756d86-pwhmr"] Nov 24 21:01:38 crc kubenswrapper[4812]: W1124 21:01:38.592693 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod979a479b_f9c2_49dc_8069_15e64a67447e.slice/crio-1e6092a9bf7c1fcae5ed97a948d9b48ecb3cbf94f77f1004cc8227a9e775cc93 WatchSource:0}: Error finding container 1e6092a9bf7c1fcae5ed97a948d9b48ecb3cbf94f77f1004cc8227a9e775cc93: Status 404 returned error can't find the container with id 1e6092a9bf7c1fcae5ed97a948d9b48ecb3cbf94f77f1004cc8227a9e775cc93 Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.683104 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d5d7f948-4gnkf"] Nov 24 21:01:38 crc kubenswrapper[4812]: W1124 21:01:38.686633 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe078702_3711_4540_834f_9627bfc1da1c.slice/crio-da80990e4a3ad41840d8e495055d580051dc5a596e0c126f3df3a4209beca1ed WatchSource:0}: Error finding container da80990e4a3ad41840d8e495055d580051dc5a596e0c126f3df3a4209beca1ed: Status 404 returned error can't find the container with id da80990e4a3ad41840d8e495055d580051dc5a596e0c126f3df3a4209beca1ed Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.721210 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d5d7f948-4gnkf" event={"ID":"be078702-3711-4540-834f-9627bfc1da1c","Type":"ContainerStarted","Data":"da80990e4a3ad41840d8e495055d580051dc5a596e0c126f3df3a4209beca1ed"} Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.722750 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69c756d86-pwhmr" event={"ID":"979a479b-f9c2-49dc-8069-15e64a67447e","Type":"ContainerStarted","Data":"1e6092a9bf7c1fcae5ed97a948d9b48ecb3cbf94f77f1004cc8227a9e775cc93"} Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.857432 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7ffd46775-pj869"] Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.913174 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77885f85fc-q7crr"] Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.913417 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-77885f85fc-q7crr" podUID="c11d0d38-cbc2-42e5-bbc8-8bdd61756602" containerName="heat-cfnapi" containerID="cri-o://09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08" gracePeriod=60 Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 
21:01:38.933992 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d99fbd676-58btb"] Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.934162 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7d99fbd676-58btb" podUID="35738265-6edb-4fdd-9dcf-c76a7097a0c2" containerName="heat-api" containerID="cri-o://759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad" gracePeriod=60 Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.938800 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-65d8fb47c7-4swww"] Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.940055 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.947135 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.947553 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.957118 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65d8fb47c7-4swww"] Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.965539 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-69f4dd449f-ktzpp"] Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.966973 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.975612 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 24 21:01:38 crc kubenswrapper[4812]: I1124 21:01:38.975787 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.012846 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-655bf47886-z4qqq" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.023440 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69f4dd449f-ktzpp"] Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.127681 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-combined-ca-bundle\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.127759 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-internal-tls-certs\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.127816 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-config-data\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.127849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttzr\" (UniqueName: \"kubernetes.io/projected/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-kube-api-access-9ttzr\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.127868 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-config-data-custom\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.127934 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-internal-tls-certs\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.127977 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7t2d\" (UniqueName: \"kubernetes.io/projected/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-kube-api-access-s7t2d\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.128021 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-public-tls-certs\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.128039 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-config-data\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.128078 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-config-data-custom\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.128106 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-combined-ca-bundle\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 
21:01:39.128143 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-public-tls-certs\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.229889 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7t2d\" (UniqueName: \"kubernetes.io/projected/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-kube-api-access-s7t2d\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.229957 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-public-tls-certs\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.229984 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-config-data\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-config-data-custom\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-combined-ca-bundle\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230077 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-public-tls-certs\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230152 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-combined-ca-bundle\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230207 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-internal-tls-certs\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230243 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-config-data\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230271 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttzr\" (UniqueName: \"kubernetes.io/projected/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-kube-api-access-9ttzr\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230297 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-config-data-custom\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.230359 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-internal-tls-certs\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.239010 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-config-data-custom\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.242558 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-public-tls-certs\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.242722 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-internal-tls-certs\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.243021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-internal-tls-certs\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.243082 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-config-data\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.243150 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-combined-ca-bundle\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.243359 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-public-tls-certs\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.243825 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-config-data-custom\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.244002 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-combined-ca-bundle\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.255646 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-config-data\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.267221 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7t2d\" (UniqueName: \"kubernetes.io/projected/d7ec3a8b-b145-4ae6-9ec5-36d0e8778782-kube-api-access-s7t2d\") pod \"heat-cfnapi-65d8fb47c7-4swww\" (UID: \"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782\") " pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.267706 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.272054 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttzr\" (UniqueName: \"kubernetes.io/projected/d9ae0eab-8a74-4569-993b-ac9cfca1fe08-kube-api-access-9ttzr\") pod \"heat-api-69f4dd449f-ktzpp\" (UID: \"d9ae0eab-8a74-4569-993b-ac9cfca1fe08\") " pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.298242 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.573085 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7d99fbd676-58btb" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.699070 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-77885f85fc-q7crr" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.742254 4812 generic.go:334] "Generic (PLEG): container finished" podID="979a479b-f9c2-49dc-8069-15e64a67447e" containerID="586ed90b4044ed54812f21f9258fd9ee83767678743412dac129725613cc46ea" exitCode=1 Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.742310 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69c756d86-pwhmr" event={"ID":"979a479b-f9c2-49dc-8069-15e64a67447e","Type":"ContainerDied","Data":"586ed90b4044ed54812f21f9258fd9ee83767678743412dac129725613cc46ea"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.742937 4812 scope.go:117] "RemoveContainer" containerID="586ed90b4044ed54812f21f9258fd9ee83767678743412dac129725613cc46ea" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.751547 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data-custom\") pod \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.752274 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfdgf\" (UniqueName: \"kubernetes.io/projected/35738265-6edb-4fdd-9dcf-c76a7097a0c2-kube-api-access-wfdgf\") pod \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.752300 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxqkr\" (UniqueName: \"kubernetes.io/projected/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-kube-api-access-jxqkr\") pod \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.752348 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data-custom\") pod \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.752365 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data\") pod \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.752401 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-combined-ca-bundle\") pod \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\" (UID: \"35738265-6edb-4fdd-9dcf-c76a7097a0c2\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.752431 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-combined-ca-bundle\") pod \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.752460 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data\") pod \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\" (UID: \"c11d0d38-cbc2-42e5-bbc8-8bdd61756602\") " Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.762627 4812 generic.go:334] "Generic (PLEG): container finished" podID="c11d0d38-cbc2-42e5-bbc8-8bdd61756602" containerID="09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08" exitCode=0 Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.762712 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77885f85fc-q7crr" event={"ID":"c11d0d38-cbc2-42e5-bbc8-8bdd61756602","Type":"ContainerDied","Data":"09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.762741 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77885f85fc-q7crr" event={"ID":"c11d0d38-cbc2-42e5-bbc8-8bdd61756602","Type":"ContainerDied","Data":"2e64032f66df6894367a555961a55fd54ccc48f1a50d9e1e5f34434ff79d2803"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.762760 4812 scope.go:117] "RemoveContainer" containerID="09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.762887 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77885f85fc-q7crr" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.768230 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "35738265-6edb-4fdd-9dcf-c76a7097a0c2" (UID: "35738265-6edb-4fdd-9dcf-c76a7097a0c2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.775943 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c11d0d38-cbc2-42e5-bbc8-8bdd61756602" (UID: "c11d0d38-cbc2-42e5-bbc8-8bdd61756602"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.776517 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35738265-6edb-4fdd-9dcf-c76a7097a0c2-kube-api-access-wfdgf" (OuterVolumeSpecName: "kube-api-access-wfdgf") pod "35738265-6edb-4fdd-9dcf-c76a7097a0c2" (UID: "35738265-6edb-4fdd-9dcf-c76a7097a0c2"). InnerVolumeSpecName "kube-api-access-wfdgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.776817 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7ffd46775-pj869" event={"ID":"9f685759-c557-4f80-be1e-7719f27d09a9","Type":"ContainerDied","Data":"03d0036d38252c5c7d210e88fd58998a19b2b83b4ac9b77512a228a0ba0f5306"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.776793 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f685759-c557-4f80-be1e-7719f27d09a9" containerID="03d0036d38252c5c7d210e88fd58998a19b2b83b4ac9b77512a228a0ba0f5306" exitCode=1 Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.779412 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7ffd46775-pj869" event={"ID":"9f685759-c557-4f80-be1e-7719f27d09a9","Type":"ContainerStarted","Data":"48767358f32222a39041f314120177a73a302e88de66a9624474f1baf46729fd"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.777460 4812 scope.go:117] "RemoveContainer" containerID="03d0036d38252c5c7d210e88fd58998a19b2b83b4ac9b77512a228a0ba0f5306" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.789978 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-kube-api-access-jxqkr" (OuterVolumeSpecName: "kube-api-access-jxqkr") pod "c11d0d38-cbc2-42e5-bbc8-8bdd61756602" (UID: "c11d0d38-cbc2-42e5-bbc8-8bdd61756602"). InnerVolumeSpecName "kube-api-access-jxqkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.802222 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d5d7f948-4gnkf" event={"ID":"be078702-3711-4540-834f-9627bfc1da1c","Type":"ContainerStarted","Data":"0dab2236560737878ec970e3a862cc481e2953dd2494fcbe79b15ca0362877f8"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.803099 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.824058 4812 generic.go:334] "Generic (PLEG): container finished" podID="35738265-6edb-4fdd-9dcf-c76a7097a0c2" containerID="759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad" exitCode=0 Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.824110 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d99fbd676-58btb" event={"ID":"35738265-6edb-4fdd-9dcf-c76a7097a0c2","Type":"ContainerDied","Data":"759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.824229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7d99fbd676-58btb" event={"ID":"35738265-6edb-4fdd-9dcf-c76a7097a0c2","Type":"ContainerDied","Data":"658ea9ca1114856e02b8f1ae7c00d1e95c6428a0f7a9d37c07e4d4b5bde7f071"} Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.824137 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7d99fbd676-58btb" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.828713 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-65d8fb47c7-4swww"] Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.833402 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6d5d7f948-4gnkf" podStartSLOduration=2.833393128 podStartE2EDuration="2.833393128s" podCreationTimestamp="2025-11-24 21:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:01:39.83168778 +0000 UTC m=+6293.620640161" watchObservedRunningTime="2025-11-24 21:01:39.833393128 +0000 UTC m=+6293.622345499" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.861222 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.861545 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.862915 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfdgf\" (UniqueName: \"kubernetes.io/projected/35738265-6edb-4fdd-9dcf-c76a7097a0c2-kube-api-access-wfdgf\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.862984 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxqkr\" (UniqueName: \"kubernetes.io/projected/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-kube-api-access-jxqkr\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.863989 4812 scope.go:117] "RemoveContainer" containerID="09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08" Nov 24 21:01:39 crc kubenswrapper[4812]: E1124 21:01:39.864389 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08\": container with ID starting with 09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08 not found: ID does not exist" containerID="09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.864417 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08"} err="failed to get container status \"09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08\": rpc error: code = NotFound desc = could not find container \"09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08\": container with ID starting with 09529b44512a6edf216621395cd5dd295405ce3e686743e3c8ec390d88acbc08 not found: ID does not exist" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.864436 4812 scope.go:117] "RemoveContainer" containerID="759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.870629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "c11d0d38-cbc2-42e5-bbc8-8bdd61756602" (UID: "c11d0d38-cbc2-42e5-bbc8-8bdd61756602"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.883518 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35738265-6edb-4fdd-9dcf-c76a7097a0c2" (UID: "35738265-6edb-4fdd-9dcf-c76a7097a0c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.889595 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data" (OuterVolumeSpecName: "config-data") pod "c11d0d38-cbc2-42e5-bbc8-8bdd61756602" (UID: "c11d0d38-cbc2-42e5-bbc8-8bdd61756602"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.895837 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data" (OuterVolumeSpecName: "config-data") pod "35738265-6edb-4fdd-9dcf-c76a7097a0c2" (UID: "35738265-6edb-4fdd-9dcf-c76a7097a0c2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.915410 4812 scope.go:117] "RemoveContainer" containerID="759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad" Nov 24 21:01:39 crc kubenswrapper[4812]: E1124 21:01:39.922178 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad\": container with ID starting with 759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad not found: ID does not exist" containerID="759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.922305 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad"} err="failed to get container status \"759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad\": rpc error: code = NotFound desc = could not find container \"759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad\": container with ID starting with 759e42e8197ec91838883e1dde590f538a048648b4a27a6cfbfe8089857b47ad not found: ID does not exist" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.952890 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69f4dd449f-ktzpp"] Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.966113 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:01:39 crc kubenswrapper[4812]: E1124 21:01:39.966530 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" 
podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:01:39 crc kubenswrapper[4812]: W1124 21:01:39.967248 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9ae0eab_8a74_4569_993b_ac9cfca1fe08.slice/crio-bbe91a41b9a6e4ef99273349315ae781dcc300c60daeff8d2e62e7d32c08cc21 WatchSource:0}: Error finding container bbe91a41b9a6e4ef99273349315ae781dcc300c60daeff8d2e62e7d32c08cc21: Status 404 returned error can't find the container with id bbe91a41b9a6e4ef99273349315ae781dcc300c60daeff8d2e62e7d32c08cc21 Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.967519 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.967550 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.967559 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35738265-6edb-4fdd-9dcf-c76a7097a0c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:39 crc kubenswrapper[4812]: I1124 21:01:39.967569 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c11d0d38-cbc2-42e5-bbc8-8bdd61756602-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.095511 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77885f85fc-q7crr"] Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.106659 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-77885f85fc-q7crr"] Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.229736 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7d99fbd676-58btb"] Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.239093 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7d99fbd676-58btb"] Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.845105 4812 generic.go:334] "Generic (PLEG): container finished" podID="979a479b-f9c2-49dc-8069-15e64a67447e" containerID="662b0292ebf0edd9792f00f2f36519afadaedcf9209662915dd9d3ebca6d410c" exitCode=1 Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.845174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69c756d86-pwhmr" event={"ID":"979a479b-f9c2-49dc-8069-15e64a67447e","Type":"ContainerDied","Data":"662b0292ebf0edd9792f00f2f36519afadaedcf9209662915dd9d3ebca6d410c"} Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.845209 4812 scope.go:117] "RemoveContainer" containerID="586ed90b4044ed54812f21f9258fd9ee83767678743412dac129725613cc46ea" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.845887 4812 scope.go:117] "RemoveContainer" containerID="662b0292ebf0edd9792f00f2f36519afadaedcf9209662915dd9d3ebca6d410c" Nov 24 21:01:40 crc kubenswrapper[4812]: E1124 21:01:40.846203 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69c756d86-pwhmr_openstack(979a479b-f9c2-49dc-8069-15e64a67447e)\"" 
pod="openstack/heat-api-69c756d86-pwhmr" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.875194 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f685759-c557-4f80-be1e-7719f27d09a9" containerID="2844ff36ae67f62eddf84503b7e8bd46ff08cf0e36b0534689543a7e751415cb" exitCode=1 Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.875273 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7ffd46775-pj869" event={"ID":"9f685759-c557-4f80-be1e-7719f27d09a9","Type":"ContainerDied","Data":"2844ff36ae67f62eddf84503b7e8bd46ff08cf0e36b0534689543a7e751415cb"} Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.876436 4812 scope.go:117] "RemoveContainer" containerID="2844ff36ae67f62eddf84503b7e8bd46ff08cf0e36b0534689543a7e751415cb" Nov 24 21:01:40 crc kubenswrapper[4812]: E1124 21:01:40.876707 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7ffd46775-pj869_openstack(9f685759-c557-4f80-be1e-7719f27d09a9)\"" pod="openstack/heat-cfnapi-7ffd46775-pj869" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.887057 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69f4dd449f-ktzpp" event={"ID":"d9ae0eab-8a74-4569-993b-ac9cfca1fe08","Type":"ContainerStarted","Data":"49245aa0d2f70f9802994dc93331ed4c53c28c0a39b60e686458ad73b5b0c4ad"} Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.887095 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69f4dd449f-ktzpp" event={"ID":"d9ae0eab-8a74-4569-993b-ac9cfca1fe08","Type":"ContainerStarted","Data":"bbe91a41b9a6e4ef99273349315ae781dcc300c60daeff8d2e62e7d32c08cc21"} Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.887838 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.908441 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65d8fb47c7-4swww" event={"ID":"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782","Type":"ContainerStarted","Data":"a92802b4e7b3d12144568d007407f3688aeaefec36f5c2005dd76bbda2deb053"} Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.908478 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-65d8fb47c7-4swww" event={"ID":"d7ec3a8b-b145-4ae6-9ec5-36d0e8778782","Type":"ContainerStarted","Data":"8b88445e6aa77ec66a52fee31767176451c0761f7e4dadddf477d2bead364e95"} Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.908512 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.934139 4812 scope.go:117] "RemoveContainer" containerID="03d0036d38252c5c7d210e88fd58998a19b2b83b4ac9b77512a228a0ba0f5306" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.939863 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-69f4dd449f-ktzpp" podStartSLOduration=2.939837991 podStartE2EDuration="2.939837991s" podCreationTimestamp="2025-11-24 21:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:01:40.907493656 +0000 UTC m=+6294.696446027" 
watchObservedRunningTime="2025-11-24 21:01:40.939837991 +0000 UTC m=+6294.728790372" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.949268 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-65d8fb47c7-4swww" podStartSLOduration=2.949248777 podStartE2EDuration="2.949248777s" podCreationTimestamp="2025-11-24 21:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:01:40.929285192 +0000 UTC m=+6294.718237583" watchObservedRunningTime="2025-11-24 21:01:40.949248777 +0000 UTC m=+6294.738201148" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.993219 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35738265-6edb-4fdd-9dcf-c76a7097a0c2" path="/var/lib/kubelet/pods/35738265-6edb-4fdd-9dcf-c76a7097a0c2/volumes" Nov 24 21:01:40 crc kubenswrapper[4812]: I1124 21:01:40.993795 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11d0d38-cbc2-42e5-bbc8-8bdd61756602" path="/var/lib/kubelet/pods/c11d0d38-cbc2-42e5-bbc8-8bdd61756602/volumes" Nov 24 21:01:41 crc kubenswrapper[4812]: I1124 21:01:41.218035 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7fb88df497-nxz6w" Nov 24 21:01:41 crc kubenswrapper[4812]: I1124 21:01:41.922014 4812 scope.go:117] "RemoveContainer" containerID="662b0292ebf0edd9792f00f2f36519afadaedcf9209662915dd9d3ebca6d410c" Nov 24 21:01:41 crc kubenswrapper[4812]: E1124 21:01:41.922615 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69c756d86-pwhmr_openstack(979a479b-f9c2-49dc-8069-15e64a67447e)\"" pod="openstack/heat-api-69c756d86-pwhmr" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" Nov 24 21:01:41 crc kubenswrapper[4812]: I1124 21:01:41.926219 4812 scope.go:117] "RemoveContainer" containerID="2844ff36ae67f62eddf84503b7e8bd46ff08cf0e36b0534689543a7e751415cb" Nov 24 21:01:41 crc kubenswrapper[4812]: E1124 21:01:41.926472 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7ffd46775-pj869_openstack(9f685759-c557-4f80-be1e-7719f27d09a9)\"" pod="openstack/heat-cfnapi-7ffd46775-pj869" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" Nov 24 21:01:43 crc kubenswrapper[4812]: I1124 21:01:43.177881 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:43 crc kubenswrapper[4812]: I1124 21:01:43.177927 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:43 crc kubenswrapper[4812]: I1124 21:01:43.178449 4812 scope.go:117] "RemoveContainer" containerID="662b0292ebf0edd9792f00f2f36519afadaedcf9209662915dd9d3ebca6d410c" Nov 24 21:01:43 crc kubenswrapper[4812]: E1124 21:01:43.178656 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69c756d86-pwhmr_openstack(979a479b-f9c2-49dc-8069-15e64a67447e)\"" pod="openstack/heat-api-69c756d86-pwhmr" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" Nov 24 21:01:43 crc kubenswrapper[4812]: I1124 21:01:43.255513 4812 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:43 crc kubenswrapper[4812]: I1124 21:01:43.256040 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:43 crc kubenswrapper[4812]: I1124 21:01:43.257393 4812 scope.go:117] "RemoveContainer" containerID="2844ff36ae67f62eddf84503b7e8bd46ff08cf0e36b0534689543a7e751415cb" Nov 24 21:01:43 crc kubenswrapper[4812]: E1124 21:01:43.257935 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-7ffd46775-pj869_openstack(9f685759-c557-4f80-be1e-7719f27d09a9)\"" pod="openstack/heat-cfnapi-7ffd46775-pj869" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" Nov 24 21:01:48 crc kubenswrapper[4812]: I1124 21:01:48.190255 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6d5d7f948-4gnkf" Nov 24 21:01:48 crc kubenswrapper[4812]: I1124 21:01:48.252919 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7fb88df497-nxz6w"] Nov 24 21:01:48 crc kubenswrapper[4812]: I1124 21:01:48.253275 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7fb88df497-nxz6w" podUID="1ce7c567-be13-4d0c-bf12-fb472b0e8661" containerName="heat-engine" containerID="cri-o://d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" gracePeriod=60 Nov 24 21:01:49 crc kubenswrapper[4812]: I1124 21:01:49.012751 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-655bf47886-z4qqq" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused" Nov 24 21:01:50 crc kubenswrapper[4812]: I1124 21:01:50.782420 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-65d8fb47c7-4swww" Nov 24 21:01:50 crc kubenswrapper[4812]: I1124 21:01:50.805109 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-69f4dd449f-ktzpp" Nov 24 21:01:50 crc kubenswrapper[4812]: I1124 21:01:50.871552 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7ffd46775-pj869"] Nov 24 21:01:50 crc kubenswrapper[4812]: I1124 21:01:50.907589 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69c756d86-pwhmr"] Nov 24 21:01:51 crc kubenswrapper[4812]: E1124 21:01:51.184888 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:01:51 crc kubenswrapper[4812]: E1124 21:01:51.186228 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:01:51 crc kubenswrapper[4812]: E1124 21:01:51.187612 4812 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:01:51 crc kubenswrapper[4812]: E1124 21:01:51.187655 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7fb88df497-nxz6w" podUID="1ce7c567-be13-4d0c-bf12-fb472b0e8661" containerName="heat-engine" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.404966 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.413271 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.449388 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf7k9\" (UniqueName: \"kubernetes.io/projected/9f685759-c557-4f80-be1e-7719f27d09a9-kube-api-access-rf7k9\") pod \"9f685759-c557-4f80-be1e-7719f27d09a9\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.449669 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data-custom\") pod \"979a479b-f9c2-49dc-8069-15e64a67447e\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.449804 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdqcj\" (UniqueName: \"kubernetes.io/projected/979a479b-f9c2-49dc-8069-15e64a67447e-kube-api-access-rdqcj\") pod \"979a479b-f9c2-49dc-8069-15e64a67447e\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.450466 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data\") pod \"979a479b-f9c2-49dc-8069-15e64a67447e\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.450549 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data-custom\") pod \"9f685759-c557-4f80-be1e-7719f27d09a9\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.450627 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-combined-ca-bundle\") pod \"979a479b-f9c2-49dc-8069-15e64a67447e\" (UID: \"979a479b-f9c2-49dc-8069-15e64a67447e\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.450700 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-combined-ca-bundle\") pod \"9f685759-c557-4f80-be1e-7719f27d09a9\" (UID: 
\"9f685759-c557-4f80-be1e-7719f27d09a9\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.450784 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data\") pod \"9f685759-c557-4f80-be1e-7719f27d09a9\" (UID: \"9f685759-c557-4f80-be1e-7719f27d09a9\") " Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.459904 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f685759-c557-4f80-be1e-7719f27d09a9-kube-api-access-rf7k9" (OuterVolumeSpecName: "kube-api-access-rf7k9") pod "9f685759-c557-4f80-be1e-7719f27d09a9" (UID: "9f685759-c557-4f80-be1e-7719f27d09a9"). InnerVolumeSpecName "kube-api-access-rf7k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.460822 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "979a479b-f9c2-49dc-8069-15e64a67447e" (UID: "979a479b-f9c2-49dc-8069-15e64a67447e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.462584 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f685759-c557-4f80-be1e-7719f27d09a9" (UID: "9f685759-c557-4f80-be1e-7719f27d09a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.483638 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979a479b-f9c2-49dc-8069-15e64a67447e-kube-api-access-rdqcj" (OuterVolumeSpecName: "kube-api-access-rdqcj") pod "979a479b-f9c2-49dc-8069-15e64a67447e" (UID: "979a479b-f9c2-49dc-8069-15e64a67447e"). InnerVolumeSpecName "kube-api-access-rdqcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.500136 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "979a479b-f9c2-49dc-8069-15e64a67447e" (UID: "979a479b-f9c2-49dc-8069-15e64a67447e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.514799 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f685759-c557-4f80-be1e-7719f27d09a9" (UID: "9f685759-c557-4f80-be1e-7719f27d09a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.522646 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data" (OuterVolumeSpecName: "config-data") pod "979a479b-f9c2-49dc-8069-15e64a67447e" (UID: "979a479b-f9c2-49dc-8069-15e64a67447e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.547311 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data" (OuterVolumeSpecName: "config-data") pod "9f685759-c557-4f80-be1e-7719f27d09a9" (UID: "9f685759-c557-4f80-be1e-7719f27d09a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554203 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554230 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554239 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf7k9\" (UniqueName: \"kubernetes.io/projected/9f685759-c557-4f80-be1e-7719f27d09a9-kube-api-access-rf7k9\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554249 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554258 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdqcj\" (UniqueName: \"kubernetes.io/projected/979a479b-f9c2-49dc-8069-15e64a67447e-kube-api-access-rdqcj\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554266 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554273 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f685759-c557-4f80-be1e-7719f27d09a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:51 crc kubenswrapper[4812]: I1124 21:01:51.554282 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/979a479b-f9c2-49dc-8069-15e64a67447e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.035619 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69c756d86-pwhmr" event={"ID":"979a479b-f9c2-49dc-8069-15e64a67447e","Type":"ContainerDied","Data":"1e6092a9bf7c1fcae5ed97a948d9b48ecb3cbf94f77f1004cc8227a9e775cc93"} Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.035668 4812 scope.go:117] "RemoveContainer" containerID="662b0292ebf0edd9792f00f2f36519afadaedcf9209662915dd9d3ebca6d410c" Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.035723 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-69c756d86-pwhmr" Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.038734 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7ffd46775-pj869" event={"ID":"9f685759-c557-4f80-be1e-7719f27d09a9","Type":"ContainerDied","Data":"48767358f32222a39041f314120177a73a302e88de66a9624474f1baf46729fd"} Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.038831 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7ffd46775-pj869" Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.065525 4812 scope.go:117] "RemoveContainer" containerID="2844ff36ae67f62eddf84503b7e8bd46ff08cf0e36b0534689543a7e751415cb" Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.081046 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69c756d86-pwhmr"] Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.089120 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-69c756d86-pwhmr"] Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.097059 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7ffd46775-pj869"] Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.104313 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7ffd46775-pj869"] Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.981560 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" path="/var/lib/kubelet/pods/979a479b-f9c2-49dc-8069-15e64a67447e/volumes" Nov 24 21:01:52 crc kubenswrapper[4812]: I1124 21:01:52.982539 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" path="/var/lib/kubelet/pods/9f685759-c557-4f80-be1e-7719f27d09a9/volumes" Nov 24 21:01:53 crc kubenswrapper[4812]: I1124 21:01:53.967109 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:01:53 crc kubenswrapper[4812]: E1124 21:01:53.967832 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:01:56 crc kubenswrapper[4812]: I1124 21:01:56.061979 4812 scope.go:117] "RemoveContainer" containerID="9bc46cfb314a54f3cde70e4067c64d432c623ab0c8a0939edd8d5898a7b3801f" Nov 24 21:01:56 crc kubenswrapper[4812]: I1124 21:01:56.101988 4812 scope.go:117] "RemoveContainer" containerID="76b6eb9693cdf7b0aac5eba5e66a95786a942ebc1a8f63bd8efcfb1f6bc8bebe" Nov 24 21:01:56 crc kubenswrapper[4812]: I1124 21:01:56.217633 4812 scope.go:117] "RemoveContainer" containerID="e6c83beac92b914ac6eb587410357b8c38334d2f8b704b2611bcb6176a7a98b2" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.148320 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7fb88df497-nxz6w" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.175793 4812 generic.go:334] "Generic (PLEG): container finished" podID="1ce7c567-be13-4d0c-bf12-fb472b0e8661" containerID="d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" exitCode=0 Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.175845 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fb88df497-nxz6w" event={"ID":"1ce7c567-be13-4d0c-bf12-fb472b0e8661","Type":"ContainerDied","Data":"d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71"} Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.175872 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7fb88df497-nxz6w" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.175906 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7fb88df497-nxz6w" event={"ID":"1ce7c567-be13-4d0c-bf12-fb472b0e8661","Type":"ContainerDied","Data":"bf893361b5bd95a4aee59fef5c116806df19e8920a8d4c711a594f0172ec7e65"} Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.175934 4812 scope.go:117] "RemoveContainer" containerID="d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.202681 4812 scope.go:117] "RemoveContainer" containerID="d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" Nov 24 21:01:58 crc kubenswrapper[4812]: E1124 21:01:58.203214 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71\": container with ID starting with d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71 not found: ID does not exist" containerID="d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.203278 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71"} err="failed to get container status \"d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71\": rpc error: code = NotFound desc = could not find container \"d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71\": container with ID starting with d9e45454fd74ef73bfcacaf653c0edad0012048d4ed1ec72988726a686dcec71 not found: ID does not exist" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.212273 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data-custom\") pod \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.212521 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-combined-ca-bundle\") pod \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.212690 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data\") pod 
\"1ce7c567-be13-4d0c-bf12-fb472b0e8661\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.212816 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnc6f\" (UniqueName: \"kubernetes.io/projected/1ce7c567-be13-4d0c-bf12-fb472b0e8661-kube-api-access-cnc6f\") pod \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\" (UID: \"1ce7c567-be13-4d0c-bf12-fb472b0e8661\") " Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.220829 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ce7c567-be13-4d0c-bf12-fb472b0e8661-kube-api-access-cnc6f" (OuterVolumeSpecName: "kube-api-access-cnc6f") pod "1ce7c567-be13-4d0c-bf12-fb472b0e8661" (UID: "1ce7c567-be13-4d0c-bf12-fb472b0e8661"). InnerVolumeSpecName "kube-api-access-cnc6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.221736 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1ce7c567-be13-4d0c-bf12-fb472b0e8661" (UID: "1ce7c567-be13-4d0c-bf12-fb472b0e8661"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.254776 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ce7c567-be13-4d0c-bf12-fb472b0e8661" (UID: "1ce7c567-be13-4d0c-bf12-fb472b0e8661"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.272717 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data" (OuterVolumeSpecName: "config-data") pod "1ce7c567-be13-4d0c-bf12-fb472b0e8661" (UID: "1ce7c567-be13-4d0c-bf12-fb472b0e8661"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.315550 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnc6f\" (UniqueName: \"kubernetes.io/projected/1ce7c567-be13-4d0c-bf12-fb472b0e8661-kube-api-access-cnc6f\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.315608 4812 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.315627 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.315644 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ce7c567-be13-4d0c-bf12-fb472b0e8661-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.519246 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7fb88df497-nxz6w"] Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.530048 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7fb88df497-nxz6w"] Nov 24 21:01:58 crc kubenswrapper[4812]: I1124 21:01:58.982398 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ce7c567-be13-4d0c-bf12-fb472b0e8661" path="/var/lib/kubelet/pods/1ce7c567-be13-4d0c-bf12-fb472b0e8661/volumes" Nov 24 21:01:59 crc kubenswrapper[4812]: I1124 21:01:59.012601 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-655bf47886-z4qqq" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.122:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.122:8443: connect: connection refused" Nov 24 21:01:59 crc kubenswrapper[4812]: I1124 21:01:59.012742 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.040893 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.212972 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-secret-key\") pod \"28908d79-deac-497f-94c0-dd47c148c6ba\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.213211 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-scripts\") pod \"28908d79-deac-497f-94c0-dd47c148c6ba\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.213382 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28908d79-deac-497f-94c0-dd47c148c6ba-logs\") pod \"28908d79-deac-497f-94c0-dd47c148c6ba\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.213467 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-config-data\") pod \"28908d79-deac-497f-94c0-dd47c148c6ba\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.213503 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-combined-ca-bundle\") pod \"28908d79-deac-497f-94c0-dd47c148c6ba\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.213548 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6rgp\" (UniqueName: \"kubernetes.io/projected/28908d79-deac-497f-94c0-dd47c148c6ba-kube-api-access-t6rgp\") pod \"28908d79-deac-497f-94c0-dd47c148c6ba\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.213682 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-tls-certs\") pod \"28908d79-deac-497f-94c0-dd47c148c6ba\" (UID: \"28908d79-deac-497f-94c0-dd47c148c6ba\") " Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.213897 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28908d79-deac-497f-94c0-dd47c148c6ba-logs" (OuterVolumeSpecName: "logs") pod "28908d79-deac-497f-94c0-dd47c148c6ba" (UID: "28908d79-deac-497f-94c0-dd47c148c6ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.214625 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28908d79-deac-497f-94c0-dd47c148c6ba-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.220675 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28908d79-deac-497f-94c0-dd47c148c6ba-kube-api-access-t6rgp" (OuterVolumeSpecName: "kube-api-access-t6rgp") pod "28908d79-deac-497f-94c0-dd47c148c6ba" (UID: "28908d79-deac-497f-94c0-dd47c148c6ba"). 
InnerVolumeSpecName "kube-api-access-t6rgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.221541 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "28908d79-deac-497f-94c0-dd47c148c6ba" (UID: "28908d79-deac-497f-94c0-dd47c148c6ba"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.246896 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28908d79-deac-497f-94c0-dd47c148c6ba" (UID: "28908d79-deac-497f-94c0-dd47c148c6ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.251081 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-config-data" (OuterVolumeSpecName: "config-data") pod "28908d79-deac-497f-94c0-dd47c148c6ba" (UID: "28908d79-deac-497f-94c0-dd47c148c6ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.252651 4812 generic.go:334] "Generic (PLEG): container finished" podID="28908d79-deac-497f-94c0-dd47c148c6ba" containerID="649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55" exitCode=137 Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.252700 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-655bf47886-z4qqq" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.252749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655bf47886-z4qqq" event={"ID":"28908d79-deac-497f-94c0-dd47c148c6ba","Type":"ContainerDied","Data":"649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55"} Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.252791 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655bf47886-z4qqq" event={"ID":"28908d79-deac-497f-94c0-dd47c148c6ba","Type":"ContainerDied","Data":"d06275dbefc94ce8b0de5bf7c435fa2ea1176c2705fbb19432dbe41c8b59e4a8"} Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.252821 4812 scope.go:117] "RemoveContainer" containerID="36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.275248 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-scripts" (OuterVolumeSpecName: "scripts") pod "28908d79-deac-497f-94c0-dd47c148c6ba" (UID: "28908d79-deac-497f-94c0-dd47c148c6ba"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.317815 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.317853 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.317867 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6rgp\" (UniqueName: \"kubernetes.io/projected/28908d79-deac-497f-94c0-dd47c148c6ba-kube-api-access-t6rgp\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.317879 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.317892 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/28908d79-deac-497f-94c0-dd47c148c6ba-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.318363 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "28908d79-deac-497f-94c0-dd47c148c6ba" (UID: "28908d79-deac-497f-94c0-dd47c148c6ba"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.419951 4812 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/28908d79-deac-497f-94c0-dd47c148c6ba-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.602393 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-655bf47886-z4qqq"] Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.614732 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-655bf47886-z4qqq"] Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.659999 4812 scope.go:117] "RemoveContainer" containerID="649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.697783 4812 scope.go:117] "RemoveContainer" containerID="36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36" Nov 24 21:02:04 crc kubenswrapper[4812]: E1124 21:02:04.698301 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36\": container with ID starting with 36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36 not found: ID does not exist" containerID="36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.698342 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36"} err="failed to get container status \"36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36\": rpc error: code = NotFound desc = could not find container \"36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36\": container with ID starting with 36d8decc746379e90c829e25f954e81c7eebcdc68e71035283157b86b3d07e36 not found: ID does not exist" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.698362 4812 scope.go:117] "RemoveContainer" containerID="649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55" Nov 24 21:02:04 crc kubenswrapper[4812]: E1124 21:02:04.698757 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55\": container with ID starting with 649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55 not found: ID does not exist" containerID="649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.698778 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55"} err="failed to get container status \"649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55\": rpc error: code = NotFound desc = could not find container \"649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55\": container with ID starting with 649ee3af869868644eca5d7c0e968148d978e2206bb43fcbfd899f5f607c9c55 not found: ID does not exist" Nov 24 21:02:04 crc kubenswrapper[4812]: I1124 21:02:04.979822 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" path="/var/lib/kubelet/pods/28908d79-deac-497f-94c0-dd47c148c6ba/volumes" Nov 24 
21:02:08 crc kubenswrapper[4812]: I1124 21:02:08.966403 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:02:09 crc kubenswrapper[4812]: I1124 21:02:09.313157 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"6358004cd03b55cd72e6ef5108e3f4db8274e1654f97564aedd7e31f6875f7ff"} Nov 24 21:02:23 crc kubenswrapper[4812]: I1124 21:02:23.040960 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1c3c-account-create-vf6wg"] Nov 24 21:02:23 crc kubenswrapper[4812]: I1124 21:02:23.053659 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-r76nd"] Nov 24 21:02:23 crc kubenswrapper[4812]: I1124 21:02:23.069560 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-r76nd"] Nov 24 21:02:23 crc kubenswrapper[4812]: I1124 21:02:23.079976 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1c3c-account-create-vf6wg"] Nov 24 21:02:24 crc kubenswrapper[4812]: I1124 21:02:24.979432 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a7a55d-20ce-43b0-a4b7-4e26064e31d1" path="/var/lib/kubelet/pods/67a7a55d-20ce-43b0-a4b7-4e26064e31d1/volumes" Nov 24 21:02:24 crc kubenswrapper[4812]: I1124 21:02:24.980820 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f4e56a8-83ee-4dc6-93d9-5a02a1175e90" path="/var/lib/kubelet/pods/8f4e56a8-83ee-4dc6-93d9-5a02a1175e90/volumes" Nov 24 21:02:30 crc kubenswrapper[4812]: I1124 21:02:30.031296 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-sstpq"] Nov 24 21:02:30 crc kubenswrapper[4812]: I1124 21:02:30.041635 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-sstpq"] Nov 24 21:02:30 crc kubenswrapper[4812]: I1124 21:02:30.978790 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e" path="/var/lib/kubelet/pods/180b0dab-3a40-4d9c-aa4a-6bc8ec20f82e/volumes" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.197802 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv"] Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.198628 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11d0d38-cbc2-42e5-bbc8-8bdd61756602" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.198941 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11d0d38-cbc2-42e5-bbc8-8bdd61756602" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.198965 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.198972 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.198983 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon-log" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.198990 4812 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon-log" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.199002 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ce7c567-be13-4d0c-bf12-fb472b0e8661" containerName="heat-engine" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199009 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ce7c567-be13-4d0c-bf12-fb472b0e8661" containerName="heat-engine" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.199025 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199031 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.199046 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199053 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.199068 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199075 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.199083 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199091 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: E1124 21:02:34.199105 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35738265-6edb-4fdd-9dcf-c76a7097a0c2" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199113 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="35738265-6edb-4fdd-9dcf-c76a7097a0c2" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199378 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199389 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199402 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199419 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="28908d79-deac-497f-94c0-dd47c148c6ba" containerName="horizon-log" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199431 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="35738265-6edb-4fdd-9dcf-c76a7097a0c2" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199444 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11d0d38-cbc2-42e5-bbc8-8bdd61756602" 
containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199459 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="979a479b-f9c2-49dc-8069-15e64a67447e" containerName="heat-api" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199467 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f685759-c557-4f80-be1e-7719f27d09a9" containerName="heat-cfnapi" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.199475 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ce7c567-be13-4d0c-bf12-fb472b0e8661" containerName="heat-engine" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.201188 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.203157 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.227633 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv"] Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.315907 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.316148 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.316211 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkts\" (UniqueName: \"kubernetes.io/projected/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-kube-api-access-7mkts\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.418108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.418223 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.418254 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mkts\" (UniqueName: \"kubernetes.io/projected/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-kube-api-access-7mkts\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.418951 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.419366 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.441620 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mkts\" (UniqueName: \"kubernetes.io/projected/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-kube-api-access-7mkts\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:34 crc kubenswrapper[4812]: I1124 21:02:34.526587 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:35 crc kubenswrapper[4812]: I1124 21:02:35.021900 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv"] Nov 24 21:02:35 crc kubenswrapper[4812]: I1124 21:02:35.648924 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" event={"ID":"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe","Type":"ContainerStarted","Data":"56144e511739c18981fe599fa7cb22bbe00b67dea2c4d8b1f5f7f528789ff150"} Nov 24 21:02:35 crc kubenswrapper[4812]: I1124 21:02:35.649277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" event={"ID":"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe","Type":"ContainerStarted","Data":"9bbcdee815c9d2fa26a30a8aacc338097461a4f0e32d0ad03110fadf5a089a63"} Nov 24 21:02:36 crc kubenswrapper[4812]: I1124 21:02:36.663574 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerID="56144e511739c18981fe599fa7cb22bbe00b67dea2c4d8b1f5f7f528789ff150" exitCode=0 Nov 24 21:02:36 crc kubenswrapper[4812]: I1124 21:02:36.663649 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" event={"ID":"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe","Type":"ContainerDied","Data":"56144e511739c18981fe599fa7cb22bbe00b67dea2c4d8b1f5f7f528789ff150"} Nov 24 21:02:36 crc kubenswrapper[4812]: I1124 21:02:36.670535 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:02:38 crc kubenswrapper[4812]: I1124 21:02:38.689629 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerID="7fcd72cfd3802908d64c6ca701448f0b8731c19c9e716165e325e36ee1a92c53" exitCode=0 Nov 24 21:02:38 crc kubenswrapper[4812]: I1124 21:02:38.689707 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" event={"ID":"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe","Type":"ContainerDied","Data":"7fcd72cfd3802908d64c6ca701448f0b8731c19c9e716165e325e36ee1a92c53"} Nov 24 21:02:39 crc kubenswrapper[4812]: I1124 21:02:39.700959 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerID="aabaeebd59b399adc8495063195cc3181d0e904f9a6c7ee49695b1ee34a5ef88" exitCode=0 Nov 24 21:02:39 crc kubenswrapper[4812]: I1124 21:02:39.701126 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" event={"ID":"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe","Type":"ContainerDied","Data":"aabaeebd59b399adc8495063195cc3181d0e904f9a6c7ee49695b1ee34a5ef88"} Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.176217 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.294785 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mkts\" (UniqueName: \"kubernetes.io/projected/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-kube-api-access-7mkts\") pod \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.295010 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-bundle\") pod \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.295192 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-util\") pod \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\" (UID: \"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe\") " Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.300224 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-bundle" (OuterVolumeSpecName: "bundle") pod "c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" (UID: "c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.301797 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-kube-api-access-7mkts" (OuterVolumeSpecName: "kube-api-access-7mkts") pod "c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" (UID: "c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe"). InnerVolumeSpecName "kube-api-access-7mkts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.315523 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-util" (OuterVolumeSpecName: "util") pod "c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" (UID: "c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.398627 4812 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.398925 4812 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.399065 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mkts\" (UniqueName: \"kubernetes.io/projected/c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe-kube-api-access-7mkts\") on node \"crc\" DevicePath \"\"" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.688461 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4jm"] Nov 24 21:02:41 crc kubenswrapper[4812]: E1124 21:02:41.689601 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerName="extract" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.689628 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerName="extract" Nov 24 21:02:41 crc kubenswrapper[4812]: E1124 21:02:41.689666 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerName="pull" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.689676 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerName="pull" Nov 24 21:02:41 crc kubenswrapper[4812]: E1124 21:02:41.689691 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerName="util" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.689704 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerName="util" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.690276 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe" containerName="extract" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.712205 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.720486 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvcz\" (UniqueName: \"kubernetes.io/projected/864a6605-c368-4c3b-bfb3-f282eb5229f3-kube-api-access-2dvcz\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.720780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-utilities\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.720861 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-catalog-content\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.724291 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4jm"] Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.741747 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.741734 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv" event={"ID":"c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe","Type":"ContainerDied","Data":"9bbcdee815c9d2fa26a30a8aacc338097461a4f0e32d0ad03110fadf5a089a63"} Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.742165 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bbcdee815c9d2fa26a30a8aacc338097461a4f0e32d0ad03110fadf5a089a63" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.822575 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-utilities\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.822652 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-catalog-content\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.822780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvcz\" (UniqueName: \"kubernetes.io/projected/864a6605-c368-4c3b-bfb3-f282eb5229f3-kube-api-access-2dvcz\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.823175 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-utilities\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.823503 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-catalog-content\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:41 crc kubenswrapper[4812]: I1124 21:02:41.843100 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvcz\" (UniqueName: \"kubernetes.io/projected/864a6605-c368-4c3b-bfb3-f282eb5229f3-kube-api-access-2dvcz\") pod \"redhat-marketplace-vf4jm\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:42 crc kubenswrapper[4812]: I1124 21:02:42.040480 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:42 crc kubenswrapper[4812]: I1124 21:02:42.545303 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4jm"] Nov 24 21:02:42 crc kubenswrapper[4812]: W1124 21:02:42.551312 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod864a6605_c368_4c3b_bfb3_f282eb5229f3.slice/crio-ff9ea18adac402364426c89fade2822b0f8dd0f98a6e3c7a1981e5e5ce0c8612 WatchSource:0}: Error finding container ff9ea18adac402364426c89fade2822b0f8dd0f98a6e3c7a1981e5e5ce0c8612: Status 404 returned error can't find the container with id ff9ea18adac402364426c89fade2822b0f8dd0f98a6e3c7a1981e5e5ce0c8612 Nov 24 21:02:42 crc kubenswrapper[4812]: I1124 21:02:42.753196 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4jm" event={"ID":"864a6605-c368-4c3b-bfb3-f282eb5229f3","Type":"ContainerStarted","Data":"ff9ea18adac402364426c89fade2822b0f8dd0f98a6e3c7a1981e5e5ce0c8612"} Nov 24 21:02:43 crc kubenswrapper[4812]: I1124 21:02:43.770527 4812 generic.go:334] "Generic (PLEG): container finished" podID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerID="be232913ddc20878e68bf99f8f65a52c01e0ab01339d04a5ba133694649cf976" exitCode=0 Nov 24 21:02:43 crc kubenswrapper[4812]: I1124 21:02:43.770622 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4jm" event={"ID":"864a6605-c368-4c3b-bfb3-f282eb5229f3","Type":"ContainerDied","Data":"be232913ddc20878e68bf99f8f65a52c01e0ab01339d04a5ba133694649cf976"} Nov 24 21:02:45 crc kubenswrapper[4812]: I1124 21:02:45.800692 4812 generic.go:334] "Generic (PLEG): container finished" podID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerID="a5efbd9ed59118045c590ad31de4a0b5a75747be73095c6b752600d1ba25ec1c" exitCode=0 Nov 24 21:02:45 crc kubenswrapper[4812]: I1124 21:02:45.800800 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4jm" event={"ID":"864a6605-c368-4c3b-bfb3-f282eb5229f3","Type":"ContainerDied","Data":"a5efbd9ed59118045c590ad31de4a0b5a75747be73095c6b752600d1ba25ec1c"} Nov 24 21:02:46 crc kubenswrapper[4812]: I1124 21:02:46.836740 4812 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4jm" event={"ID":"864a6605-c368-4c3b-bfb3-f282eb5229f3","Type":"ContainerStarted","Data":"ccf2eed915ade075a362a23733c60731f75d18254a00a03ebd4bb253d492b3dc"} Nov 24 21:02:46 crc kubenswrapper[4812]: I1124 21:02:46.869267 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vf4jm" podStartSLOduration=3.433199863 podStartE2EDuration="5.86924964s" podCreationTimestamp="2025-11-24 21:02:41 +0000 UTC" firstStartedPulling="2025-11-24 21:02:43.775509838 +0000 UTC m=+6357.564462239" lastFinishedPulling="2025-11-24 21:02:46.211559645 +0000 UTC m=+6360.000512016" observedRunningTime="2025-11-24 21:02:46.864457884 +0000 UTC m=+6360.653410285" watchObservedRunningTime="2025-11-24 21:02:46.86924964 +0000 UTC m=+6360.658202001" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.183594 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.185247 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.193456 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.198434 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-7x7d9" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.198547 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.198595 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.301248 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.302477 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.307776 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-pmgnj" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.307880 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.314839 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.316233 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.323394 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.332074 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.343876 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpw6f\" (UniqueName: \"kubernetes.io/projected/2dfb256e-0c0f-4208-84f8-389921336bf1-kube-api-access-cpw6f\") pod \"obo-prometheus-operator-668cf9dfbb-djtpv\" (UID: \"2dfb256e-0c0f-4208-84f8-389921336bf1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.424964 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-wk9fv"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.426920 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.430953 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-vbdj7" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.431157 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.441639 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-wk9fv"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.446066 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpw6f\" (UniqueName: \"kubernetes.io/projected/2dfb256e-0c0f-4208-84f8-389921336bf1-kube-api-access-cpw6f\") pod \"obo-prometheus-operator-668cf9dfbb-djtpv\" (UID: \"2dfb256e-0c0f-4208-84f8-389921336bf1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.446126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l\" (UID: \"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.446208 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67d260a7-9265-4dc5-ad9a-0d52330ffab9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8\" (UID: \"67d260a7-9265-4dc5-ad9a-0d52330ffab9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.446229 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l\" (UID: \"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.446480 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67d260a7-9265-4dc5-ad9a-0d52330ffab9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8\" (UID: \"67d260a7-9265-4dc5-ad9a-0d52330ffab9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.484225 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpw6f\" (UniqueName: \"kubernetes.io/projected/2dfb256e-0c0f-4208-84f8-389921336bf1-kube-api-access-cpw6f\") pod \"obo-prometheus-operator-668cf9dfbb-djtpv\" (UID: \"2dfb256e-0c0f-4208-84f8-389921336bf1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.503023 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.528842 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-b78tl"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.530175 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.539473 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-rgn7v" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.549271 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l\" (UID: \"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.549404 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5xrl\" (UniqueName: \"kubernetes.io/projected/90781d50-55b2-4978-85f5-c107971271e1-kube-api-access-q5xrl\") pod \"observability-operator-d8bb48f5d-wk9fv\" (UID: \"90781d50-55b2-4978-85f5-c107971271e1\") " pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.549453 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67d260a7-9265-4dc5-ad9a-0d52330ffab9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8\" (UID: \"67d260a7-9265-4dc5-ad9a-0d52330ffab9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.549509 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e-webhook-cert\") 
pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l\" (UID: \"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.550132 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67d260a7-9265-4dc5-ad9a-0d52330ffab9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8\" (UID: \"67d260a7-9265-4dc5-ad9a-0d52330ffab9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.550166 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90781d50-55b2-4978-85f5-c107971271e1-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-wk9fv\" (UID: \"90781d50-55b2-4978-85f5-c107971271e1\") " pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.555838 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/67d260a7-9265-4dc5-ad9a-0d52330ffab9-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8\" (UID: \"67d260a7-9265-4dc5-ad9a-0d52330ffab9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.566288 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l\" (UID: \"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.569477 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l\" (UID: \"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.573486 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-b78tl"] Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.582067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/67d260a7-9265-4dc5-ad9a-0d52330ffab9-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8\" (UID: \"67d260a7-9265-4dc5-ad9a-0d52330ffab9\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.628247 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.638067 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.652713 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pszl2\" (UniqueName: \"kubernetes.io/projected/429f0e17-50aa-44ad-917b-2c5f8b25b182-kube-api-access-pszl2\") pod \"perses-operator-5446b9c989-b78tl\" (UID: \"429f0e17-50aa-44ad-917b-2c5f8b25b182\") " pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.652988 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/429f0e17-50aa-44ad-917b-2c5f8b25b182-openshift-service-ca\") pod \"perses-operator-5446b9c989-b78tl\" (UID: \"429f0e17-50aa-44ad-917b-2c5f8b25b182\") " pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.653108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5xrl\" (UniqueName: \"kubernetes.io/projected/90781d50-55b2-4978-85f5-c107971271e1-kube-api-access-q5xrl\") pod \"observability-operator-d8bb48f5d-wk9fv\" (UID: \"90781d50-55b2-4978-85f5-c107971271e1\") " pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.653270 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90781d50-55b2-4978-85f5-c107971271e1-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-wk9fv\" (UID: \"90781d50-55b2-4978-85f5-c107971271e1\") " pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.671398 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/90781d50-55b2-4978-85f5-c107971271e1-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-wk9fv\" (UID: \"90781d50-55b2-4978-85f5-c107971271e1\") " pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.687505 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5xrl\" (UniqueName: \"kubernetes.io/projected/90781d50-55b2-4978-85f5-c107971271e1-kube-api-access-q5xrl\") pod \"observability-operator-d8bb48f5d-wk9fv\" (UID: \"90781d50-55b2-4978-85f5-c107971271e1\") " pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.748282 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.765033 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pszl2\" (UniqueName: \"kubernetes.io/projected/429f0e17-50aa-44ad-917b-2c5f8b25b182-kube-api-access-pszl2\") pod \"perses-operator-5446b9c989-b78tl\" (UID: \"429f0e17-50aa-44ad-917b-2c5f8b25b182\") " pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.765240 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/429f0e17-50aa-44ad-917b-2c5f8b25b182-openshift-service-ca\") pod \"perses-operator-5446b9c989-b78tl\" (UID: \"429f0e17-50aa-44ad-917b-2c5f8b25b182\") " pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.766127 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/429f0e17-50aa-44ad-917b-2c5f8b25b182-openshift-service-ca\") pod \"perses-operator-5446b9c989-b78tl\" (UID: \"429f0e17-50aa-44ad-917b-2c5f8b25b182\") " pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.787206 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pszl2\" (UniqueName: \"kubernetes.io/projected/429f0e17-50aa-44ad-917b-2c5f8b25b182-kube-api-access-pszl2\") pod \"perses-operator-5446b9c989-b78tl\" (UID: \"429f0e17-50aa-44ad-917b-2c5f8b25b182\") " pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:51 crc kubenswrapper[4812]: I1124 21:02:51.963186 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.041231 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.043499 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.137612 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv"] Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.205968 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8"] Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.285075 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l"] Nov 24 21:02:52 crc kubenswrapper[4812]: W1124 21:02:52.296099 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8adfdfd_e5dd_4213_8cb5_56e8d2cf1b4e.slice/crio-399929527daa51f94dcac160fd9b4282aedd0aff636a1f06e24de83d07a34f2d WatchSource:0}: Error finding container 399929527daa51f94dcac160fd9b4282aedd0aff636a1f06e24de83d07a34f2d: Status 404 returned error can't find the container with id 399929527daa51f94dcac160fd9b4282aedd0aff636a1f06e24de83d07a34f2d Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.437310 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-wk9fv"] Nov 24 21:02:52 crc kubenswrapper[4812]: W1124 21:02:52.446048 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90781d50_55b2_4978_85f5_c107971271e1.slice/crio-ed0cede8f2b9b8557fac72a43bfb8f1cdcf8659939f89a4feac6eb25bd6e6d47 WatchSource:0}: Error finding container ed0cede8f2b9b8557fac72a43bfb8f1cdcf8659939f89a4feac6eb25bd6e6d47: Status 404 returned error can't find the container with id ed0cede8f2b9b8557fac72a43bfb8f1cdcf8659939f89a4feac6eb25bd6e6d47 Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.508648 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-b78tl"] Nov 24 21:02:52 crc kubenswrapper[4812]: W1124 21:02:52.511466 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod429f0e17_50aa_44ad_917b_2c5f8b25b182.slice/crio-a06efad7ace138e9e068387b98de323fd747b2460698c39a5914f8901417f3af WatchSource:0}: Error finding container a06efad7ace138e9e068387b98de323fd747b2460698c39a5914f8901417f3af: Status 404 returned error can't find the container with id a06efad7ace138e9e068387b98de323fd747b2460698c39a5914f8901417f3af Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.945227 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" event={"ID":"90781d50-55b2-4978-85f5-c107971271e1","Type":"ContainerStarted","Data":"ed0cede8f2b9b8557fac72a43bfb8f1cdcf8659939f89a4feac6eb25bd6e6d47"} Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.946355 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" 
event={"ID":"2dfb256e-0c0f-4208-84f8-389921336bf1","Type":"ContainerStarted","Data":"fadc7430fd4f3eeae9b06ce1ea067822629bfff314814c79b941f0ed9a6be066"} Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.951180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" event={"ID":"67d260a7-9265-4dc5-ad9a-0d52330ffab9","Type":"ContainerStarted","Data":"cc2ebc6f917c7dd052d87ddee5b8e1d421348f220e188447e55e99d65caedbb9"} Nov 24 21:02:52 crc kubenswrapper[4812]: I1124 21:02:52.956266 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" event={"ID":"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e","Type":"ContainerStarted","Data":"399929527daa51f94dcac160fd9b4282aedd0aff636a1f06e24de83d07a34f2d"} Nov 24 21:02:53 crc kubenswrapper[4812]: I1124 21:02:53.006795 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-b78tl" event={"ID":"429f0e17-50aa-44ad-917b-2c5f8b25b182","Type":"ContainerStarted","Data":"a06efad7ace138e9e068387b98de323fd747b2460698c39a5914f8901417f3af"} Nov 24 21:02:53 crc kubenswrapper[4812]: I1124 21:02:53.129704 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-vf4jm" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="registry-server" probeResult="failure" output=< Nov 24 21:02:53 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:02:53 crc kubenswrapper[4812]: > Nov 24 21:02:56 crc kubenswrapper[4812]: I1124 21:02:56.031553 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" event={"ID":"67d260a7-9265-4dc5-ad9a-0d52330ffab9","Type":"ContainerStarted","Data":"69504da79199ac857f7fa296db8b7ec475b742ff543d1e1b196dca4da115f823"} Nov 24 21:02:56 crc kubenswrapper[4812]: I1124 21:02:56.035645 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" event={"ID":"b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e","Type":"ContainerStarted","Data":"bad70485314eecc480c1d081abb15ecedb9dd2b2c6bfcae47a6de96c5cb8c384"} Nov 24 21:02:56 crc kubenswrapper[4812]: I1124 21:02:56.066203 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8" podStartSLOduration=2.530010358 podStartE2EDuration="5.06618459s" podCreationTimestamp="2025-11-24 21:02:51 +0000 UTC" firstStartedPulling="2025-11-24 21:02:52.239876572 +0000 UTC m=+6366.028828953" lastFinishedPulling="2025-11-24 21:02:54.776050814 +0000 UTC m=+6368.565003185" observedRunningTime="2025-11-24 21:02:56.051365591 +0000 UTC m=+6369.840317952" watchObservedRunningTime="2025-11-24 21:02:56.06618459 +0000 UTC m=+6369.855136961" Nov 24 21:02:56 crc kubenswrapper[4812]: I1124 21:02:56.450294 4812 scope.go:117] "RemoveContainer" containerID="a2f676d6b8ae5c685f768d0507d185c509f0d56f5792833b7ad9a0a681f109cb" Nov 24 21:02:56 crc kubenswrapper[4812]: I1124 21:02:56.993094 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l" podStartSLOduration=3.518584723 podStartE2EDuration="5.993076459s" podCreationTimestamp="2025-11-24 21:02:51 +0000 UTC" firstStartedPulling="2025-11-24 21:02:52.298677617 +0000 
UTC m=+6366.087629988" lastFinishedPulling="2025-11-24 21:02:54.773169343 +0000 UTC m=+6368.562121724" observedRunningTime="2025-11-24 21:02:56.07571981 +0000 UTC m=+6369.864672181" watchObservedRunningTime="2025-11-24 21:02:56.993076459 +0000 UTC m=+6370.782028830" Nov 24 21:02:59 crc kubenswrapper[4812]: I1124 21:02:59.031914 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c39f-account-create-wbbb6"] Nov 24 21:02:59 crc kubenswrapper[4812]: I1124 21:02:59.043741 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-khlxn"] Nov 24 21:02:59 crc kubenswrapper[4812]: I1124 21:02:59.052847 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c39f-account-create-wbbb6"] Nov 24 21:02:59 crc kubenswrapper[4812]: I1124 21:02:59.065215 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-khlxn"] Nov 24 21:03:00 crc kubenswrapper[4812]: I1124 21:03:00.610061 4812 scope.go:117] "RemoveContainer" containerID="c723f575776c611264ba81577f9f13a7fd68241671eb51a5d5cea5340c5950ff" Nov 24 21:03:00 crc kubenswrapper[4812]: I1124 21:03:00.688508 4812 scope.go:117] "RemoveContainer" containerID="bc340bc102ec7e1253bb436bbd557051fad2569885134985579154a38b405cba" Nov 24 21:03:00 crc kubenswrapper[4812]: I1124 21:03:00.739542 4812 scope.go:117] "RemoveContainer" containerID="626f4f319849b2a007b337d0e47ca243b36a8ede53f59bcacc89682800a8a007" Nov 24 21:03:00 crc kubenswrapper[4812]: I1124 21:03:00.981423 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b" path="/var/lib/kubelet/pods/73ec7d8a-ecc8-4fed-bfdc-6dc7077a351b/volumes" Nov 24 21:03:00 crc kubenswrapper[4812]: I1124 21:03:00.982265 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76be24ae-cec2-4e47-a425-c3fd4f80897e" path="/var/lib/kubelet/pods/76be24ae-cec2-4e47-a425-c3fd4f80897e/volumes" Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.096186 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-b78tl" event={"ID":"429f0e17-50aa-44ad-917b-2c5f8b25b182","Type":"ContainerStarted","Data":"897cf063c3cadb4e3c55d70ef30fd29ca9d70f0be1f9bb51b045c6abd7670a16"} Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.096262 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.105604 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" event={"ID":"90781d50-55b2-4978-85f5-c107971271e1","Type":"ContainerStarted","Data":"4815d582306debbb1f84418e1ce569c3b1dbfc848e42ded61f59486e97a89ee3"} Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.105897 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.107022 4812 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wk9fv container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.145:8081/healthz\": dial tcp 10.217.1.145:8081: connect: connection refused" start-of-body= Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.107069 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" 
podUID="90781d50-55b2-4978-85f5-c107971271e1" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.145:8081/healthz\": dial tcp 10.217.1.145:8081: connect: connection refused" Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.126724 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-b78tl" podStartSLOduration=1.951471703 podStartE2EDuration="10.126710081s" podCreationTimestamp="2025-11-24 21:02:51 +0000 UTC" firstStartedPulling="2025-11-24 21:02:52.513595563 +0000 UTC m=+6366.302547934" lastFinishedPulling="2025-11-24 21:03:00.688833941 +0000 UTC m=+6374.477786312" observedRunningTime="2025-11-24 21:03:01.117698296 +0000 UTC m=+6374.906650667" watchObservedRunningTime="2025-11-24 21:03:01.126710081 +0000 UTC m=+6374.915662452" Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.147300 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" podStartSLOduration=1.857443252 podStartE2EDuration="10.147282484s" podCreationTimestamp="2025-11-24 21:02:51 +0000 UTC" firstStartedPulling="2025-11-24 21:02:52.449586351 +0000 UTC m=+6366.238538722" lastFinishedPulling="2025-11-24 21:03:00.739425593 +0000 UTC m=+6374.528377954" observedRunningTime="2025-11-24 21:03:01.141551211 +0000 UTC m=+6374.930503602" watchObservedRunningTime="2025-11-24 21:03:01.147282484 +0000 UTC m=+6374.936234855" Nov 24 21:03:01 crc kubenswrapper[4812]: I1124 21:03:01.814805 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-wk9fv" Nov 24 21:03:02 crc kubenswrapper[4812]: I1124 21:03:02.099802 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:03:02 crc kubenswrapper[4812]: I1124 21:03:02.124371 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" event={"ID":"2dfb256e-0c0f-4208-84f8-389921336bf1","Type":"ContainerStarted","Data":"bb33fcc50d17ee76790942dc3f1326b457eb742d7ac53e1744aa39f6ba672ef8"} Nov 24 21:03:02 crc kubenswrapper[4812]: I1124 21:03:02.148115 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-djtpv" podStartSLOduration=2.608847499 podStartE2EDuration="11.148095505s" podCreationTimestamp="2025-11-24 21:02:51 +0000 UTC" firstStartedPulling="2025-11-24 21:02:52.151708195 +0000 UTC m=+6365.940660566" lastFinishedPulling="2025-11-24 21:03:00.690956201 +0000 UTC m=+6374.479908572" observedRunningTime="2025-11-24 21:03:02.142100115 +0000 UTC m=+6375.931052496" watchObservedRunningTime="2025-11-24 21:03:02.148095505 +0000 UTC m=+6375.937047886" Nov 24 21:03:02 crc kubenswrapper[4812]: I1124 21:03:02.169081 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:03:02 crc kubenswrapper[4812]: I1124 21:03:02.335667 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4jm"] Nov 24 21:03:03 crc kubenswrapper[4812]: I1124 21:03:03.133412 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vf4jm" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="registry-server" 
containerID="cri-o://ccf2eed915ade075a362a23733c60731f75d18254a00a03ebd4bb253d492b3dc" gracePeriod=2 Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.144811 4812 generic.go:334] "Generic (PLEG): container finished" podID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerID="ccf2eed915ade075a362a23733c60731f75d18254a00a03ebd4bb253d492b3dc" exitCode=0 Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.144837 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4jm" event={"ID":"864a6605-c368-4c3b-bfb3-f282eb5229f3","Type":"ContainerDied","Data":"ccf2eed915ade075a362a23733c60731f75d18254a00a03ebd4bb253d492b3dc"} Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.825028 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.961617 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-utilities\") pod \"864a6605-c368-4c3b-bfb3-f282eb5229f3\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.961740 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-catalog-content\") pod \"864a6605-c368-4c3b-bfb3-f282eb5229f3\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.961811 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dvcz\" (UniqueName: \"kubernetes.io/projected/864a6605-c368-4c3b-bfb3-f282eb5229f3-kube-api-access-2dvcz\") pod \"864a6605-c368-4c3b-bfb3-f282eb5229f3\" (UID: \"864a6605-c368-4c3b-bfb3-f282eb5229f3\") " Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.962328 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-utilities" (OuterVolumeSpecName: "utilities") pod "864a6605-c368-4c3b-bfb3-f282eb5229f3" (UID: "864a6605-c368-4c3b-bfb3-f282eb5229f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.967232 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864a6605-c368-4c3b-bfb3-f282eb5229f3-kube-api-access-2dvcz" (OuterVolumeSpecName: "kube-api-access-2dvcz") pod "864a6605-c368-4c3b-bfb3-f282eb5229f3" (UID: "864a6605-c368-4c3b-bfb3-f282eb5229f3"). InnerVolumeSpecName "kube-api-access-2dvcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:03:04 crc kubenswrapper[4812]: I1124 21:03:04.984150 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "864a6605-c368-4c3b-bfb3-f282eb5229f3" (UID: "864a6605-c368-4c3b-bfb3-f282eb5229f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.027562 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-k4kdh"] Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.037227 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-k4kdh"] Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.064112 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.064145 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dvcz\" (UniqueName: \"kubernetes.io/projected/864a6605-c368-4c3b-bfb3-f282eb5229f3-kube-api-access-2dvcz\") on node \"crc\" DevicePath \"\"" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.064172 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/864a6605-c368-4c3b-bfb3-f282eb5229f3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.169869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4jm" event={"ID":"864a6605-c368-4c3b-bfb3-f282eb5229f3","Type":"ContainerDied","Data":"ff9ea18adac402364426c89fade2822b0f8dd0f98a6e3c7a1981e5e5ce0c8612"} Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.169916 4812 scope.go:117] "RemoveContainer" containerID="ccf2eed915ade075a362a23733c60731f75d18254a00a03ebd4bb253d492b3dc" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.170054 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4jm" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.189740 4812 scope.go:117] "RemoveContainer" containerID="a5efbd9ed59118045c590ad31de4a0b5a75747be73095c6b752600d1ba25ec1c" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.208025 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4jm"] Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.215481 4812 scope.go:117] "RemoveContainer" containerID="be232913ddc20878e68bf99f8f65a52c01e0ab01339d04a5ba133694649cf976" Nov 24 21:03:05 crc kubenswrapper[4812]: I1124 21:03:05.216663 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4jm"] Nov 24 21:03:07 crc kubenswrapper[4812]: I1124 21:03:07.010432 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0" path="/var/lib/kubelet/pods/0ab3c432-0cc9-45e9-b3c2-5a96f901c3d0/volumes" Nov 24 21:03:07 crc kubenswrapper[4812]: I1124 21:03:07.011316 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" path="/var/lib/kubelet/pods/864a6605-c368-4c3b-bfb3-f282eb5229f3/volumes" Nov 24 21:03:11 crc kubenswrapper[4812]: I1124 21:03:11.966851 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-b78tl" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.676649 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.677408 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" containerName="openstackclient" containerID="cri-o://3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e" gracePeriod=2 Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.688480 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.723158 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 21:03:14 crc kubenswrapper[4812]: E1124 21:03:14.723609 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="registry-server" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.723626 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="registry-server" Nov 24 21:03:14 crc kubenswrapper[4812]: E1124 21:03:14.723640 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" containerName="openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.723647 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" containerName="openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: E1124 21:03:14.723659 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="extract-utilities" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.723665 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="extract-utilities" Nov 24 21:03:14 crc kubenswrapper[4812]: E1124 21:03:14.723675 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="extract-content" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.723681 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="extract-content" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.723869 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" containerName="openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.723882 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="864a6605-c368-4c3b-bfb3-f282eb5229f3" containerName="registry-server" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.724601 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.731936 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.753210 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.874403 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.874461 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.874519 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config-secret\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.874698 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l4l4\" (UniqueName: \"kubernetes.io/projected/d5c10c0e-5a45-4741-a82f-985844fa105c-kube-api-access-2l4l4\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.913290 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.916571 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.921405 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-q7sgx" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.924005 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.987131 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l4l4\" (UniqueName: \"kubernetes.io/projected/d5c10c0e-5a45-4741-a82f-985844fa105c-kube-api-access-2l4l4\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.987194 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.987237 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.987259 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xmh\" (UniqueName: \"kubernetes.io/projected/bc18986f-71c1-4bac-a3cb-c8d60192cbe6-kube-api-access-h9xmh\") pod \"kube-state-metrics-0\" (UID: \"bc18986f-71c1-4bac-a3cb-c8d60192cbe6\") " pod="openstack/kube-state-metrics-0" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.987316 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config-secret\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:14 crc kubenswrapper[4812]: I1124 21:03:14.989284 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.004021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config-secret\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.027754 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.095017 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9xmh\" 
(UniqueName: \"kubernetes.io/projected/bc18986f-71c1-4bac-a3cb-c8d60192cbe6-kube-api-access-h9xmh\") pod \"kube-state-metrics-0\" (UID: \"bc18986f-71c1-4bac-a3cb-c8d60192cbe6\") " pod="openstack/kube-state-metrics-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.096692 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l4l4\" (UniqueName: \"kubernetes.io/projected/d5c10c0e-5a45-4741-a82f-985844fa105c-kube-api-access-2l4l4\") pod \"openstackclient\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") " pod="openstack/openstackclient" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.141110 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9xmh\" (UniqueName: \"kubernetes.io/projected/bc18986f-71c1-4bac-a3cb-c8d60192cbe6-kube-api-access-h9xmh\") pod \"kube-state-metrics-0\" (UID: \"bc18986f-71c1-4bac-a3cb-c8d60192cbe6\") " pod="openstack/kube-state-metrics-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.213270 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.368933 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.741780 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.743994 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.748680 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.748887 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.749391 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.749625 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.749754 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-dvbsp" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.775531 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.829722 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/09f77557-419d-478a-a804-0b610554d370-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.829985 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/09f77557-419d-478a-a804-0b610554d370-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " 
pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.830013 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/09f77557-419d-478a-a804-0b610554d370-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.830070 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.830105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.830126 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-482v4\" (UniqueName: \"kubernetes.io/projected/09f77557-419d-478a-a804-0b610554d370-kube-api-access-482v4\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.830147 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.932857 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/09f77557-419d-478a-a804-0b610554d370-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.932938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/09f77557-419d-478a-a804-0b610554d370-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.932966 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/09f77557-419d-478a-a804-0b610554d370-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.933015 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-cluster-tls-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.933050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.933088 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-482v4\" (UniqueName: \"kubernetes.io/projected/09f77557-419d-478a-a804-0b610554d370-kube-api-access-482v4\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.933105 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.937640 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/09f77557-419d-478a-a804-0b610554d370-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.942665 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/09f77557-419d-478a-a804-0b610554d370-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.948063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.961835 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.962139 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/09f77557-419d-478a-a804-0b610554d370-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.964998 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-482v4\" (UniqueName: \"kubernetes.io/projected/09f77557-419d-478a-a804-0b610554d370-kube-api-access-482v4\") pod \"alertmanager-metric-storage-0\" (UID: 
\"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:15 crc kubenswrapper[4812]: I1124 21:03:15.976887 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/09f77557-419d-478a-a804-0b610554d370-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"09f77557-419d-478a-a804-0b610554d370\") " pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.051855 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.095160 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.307434 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.310619 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.314022 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.314222 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.314352 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.314521 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.314533 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.314613 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8xp77" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.334712 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.337201 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc18986f-71c1-4bac-a3cb-c8d60192cbe6","Type":"ContainerStarted","Data":"2002349606d1ad7d970f1553957d1806c3a7641ce88c30c02b0453b216846d20"} Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.428020 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.446811 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2tm\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-kube-api-access-7c2tm\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.446874 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.446915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19455782-88b2-40bd-afcd-f6334de7013a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.446952 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.446976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19455782-88b2-40bd-afcd-f6334de7013a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.447015 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.447139 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.447171 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-config\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548642 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548691 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-config\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548734 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7c2tm\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-kube-api-access-7c2tm\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548773 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548809 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19455782-88b2-40bd-afcd-f6334de7013a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548847 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548866 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19455782-88b2-40bd-afcd-f6334de7013a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.548895 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.551787 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19455782-88b2-40bd-afcd-f6334de7013a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.562765 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.562819 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e88d844691aa2a1fc69e7f251357e53e73a5ccd47896beb8c8f42aaa6dfed265/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.563297 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.565711 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19455782-88b2-40bd-afcd-f6334de7013a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.565935 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-config\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.566232 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.571021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.581487 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c2tm\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-kube-api-access-7c2tm\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.716888 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.822621 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 24 21:03:16 crc kubenswrapper[4812]: I1124 21:03:16.963304 4812 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.130851 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.133447 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.282209 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd2nl\" (UniqueName: \"kubernetes.io/projected/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-kube-api-access-xd2nl\") pod \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.282606 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-combined-ca-bundle\") pod \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.282643 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config\") pod \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.282713 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config-secret\") pod \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\" (UID: \"bee308e6-8ad3-4d4b-b3e4-34ba092d429e\") " Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.290181 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-kube-api-access-xd2nl" (OuterVolumeSpecName: "kube-api-access-xd2nl") pod "bee308e6-8ad3-4d4b-b3e4-34ba092d429e" (UID: "bee308e6-8ad3-4d4b-b3e4-34ba092d429e"). InnerVolumeSpecName "kube-api-access-xd2nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.321064 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bee308e6-8ad3-4d4b-b3e4-34ba092d429e" (UID: "bee308e6-8ad3-4d4b-b3e4-34ba092d429e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.330953 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bee308e6-8ad3-4d4b-b3e4-34ba092d429e" (UID: "bee308e6-8ad3-4d4b-b3e4-34ba092d429e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.354020 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc18986f-71c1-4bac-a3cb-c8d60192cbe6","Type":"ContainerStarted","Data":"aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179"} Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.354602 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.362164 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"09f77557-419d-478a-a804-0b610554d370","Type":"ContainerStarted","Data":"e5436d219c86dd31339e9113199aacc893fa65beb0564501e97f3f3ef5b84209"} Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.366937 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d5c10c0e-5a45-4741-a82f-985844fa105c","Type":"ContainerStarted","Data":"09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3"} Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.366996 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d5c10c0e-5a45-4741-a82f-985844fa105c","Type":"ContainerStarted","Data":"54dcc1cedba9de915568da534455d78a3d551428bfc2a6d811b9ba1e03219607"} Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.382600 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.892012663 podStartE2EDuration="3.382561785s" podCreationTimestamp="2025-11-24 21:03:14 +0000 UTC" firstStartedPulling="2025-11-24 21:03:16.126499084 +0000 UTC m=+6389.915451455" lastFinishedPulling="2025-11-24 21:03:16.617048206 +0000 UTC m=+6390.406000577" observedRunningTime="2025-11-24 21:03:17.37390816 +0000 UTC m=+6391.162860531" watchObservedRunningTime="2025-11-24 21:03:17.382561785 +0000 UTC m=+6391.171514156" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.384600 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bee308e6-8ad3-4d4b-b3e4-34ba092d429e" (UID: "bee308e6-8ad3-4d4b-b3e4-34ba092d429e"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.384666 4812 generic.go:334] "Generic (PLEG): container finished" podID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" containerID="3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e" exitCode=137 Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.384725 4812 scope.go:117] "RemoveContainer" containerID="3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.384728 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.384767 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.385193 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd2nl\" (UniqueName: \"kubernetes.io/projected/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-kube-api-access-xd2nl\") on node \"crc\" DevicePath \"\"" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.385667 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.385692 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bee308e6-8ad3-4d4b-b3e4-34ba092d429e-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.397061 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.403160 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.403142958 podStartE2EDuration="3.403142958s" podCreationTimestamp="2025-11-24 21:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:03:17.394183814 +0000 UTC m=+6391.183136185" watchObservedRunningTime="2025-11-24 21:03:17.403142958 +0000 UTC m=+6391.192095329" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.468028 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.478687 4812 scope.go:117] "RemoveContainer" containerID="3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e" Nov 24 21:03:17 crc kubenswrapper[4812]: E1124 21:03:17.479156 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e\": container with ID starting with 3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e not found: ID does not exist" containerID="3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e" Nov 24 21:03:17 crc kubenswrapper[4812]: I1124 21:03:17.479181 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e"} err="failed to get container status \"3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e\": rpc error: code = NotFound desc = could not find container \"3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e\": container with ID starting with 3d43920bccf804994c005a7a93b0e12f40753803693800250156e11dd515ef1e not found: ID does not exist" Nov 24 21:03:17 crc 
kubenswrapper[4812]: I1124 21:03:17.490315 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:03:17 crc kubenswrapper[4812]: W1124 21:03:17.493318 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19455782_88b2_40bd_afcd_f6334de7013a.slice/crio-0459ae699d76737adcc926bcfc7edaf60cd8f6d2612cbfa15f01d43f505e6d3f WatchSource:0}: Error finding container 0459ae699d76737adcc926bcfc7edaf60cd8f6d2612cbfa15f01d43f505e6d3f: Status 404 returned error can't find the container with id 0459ae699d76737adcc926bcfc7edaf60cd8f6d2612cbfa15f01d43f505e6d3f Nov 24 21:03:18 crc kubenswrapper[4812]: I1124 21:03:18.398091 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerStarted","Data":"0459ae699d76737adcc926bcfc7edaf60cd8f6d2612cbfa15f01d43f505e6d3f"} Nov 24 21:03:19 crc kubenswrapper[4812]: I1124 21:03:19.006982 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee308e6-8ad3-4d4b-b3e4-34ba092d429e" path="/var/lib/kubelet/pods/bee308e6-8ad3-4d4b-b3e4-34ba092d429e/volumes" Nov 24 21:03:23 crc kubenswrapper[4812]: I1124 21:03:23.480448 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerStarted","Data":"67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29"} Nov 24 21:03:23 crc kubenswrapper[4812]: I1124 21:03:23.485815 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"09f77557-419d-478a-a804-0b610554d370","Type":"ContainerStarted","Data":"703d2de288c04ba940a1a9387229253ec96ff90c78ddb281f55b0cdd1878fd6e"} Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.336468 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2c7jl"] Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.339998 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.350918 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c7jl"] Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.450785 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-utilities\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.450926 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6pp6\" (UniqueName: \"kubernetes.io/projected/00594133-7443-4a52-b929-22e644c37f09-kube-api-access-r6pp6\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.450999 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-catalog-content\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.553392 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-utilities\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.553471 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6pp6\" (UniqueName: \"kubernetes.io/projected/00594133-7443-4a52-b929-22e644c37f09-kube-api-access-r6pp6\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.553503 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-catalog-content\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.553862 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-utilities\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.553966 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-catalog-content\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.599042 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r6pp6\" (UniqueName: \"kubernetes.io/projected/00594133-7443-4a52-b929-22e644c37f09-kube-api-access-r6pp6\") pod \"redhat-operators-2c7jl\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:24 crc kubenswrapper[4812]: I1124 21:03:24.669760 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:25 crc kubenswrapper[4812]: W1124 21:03:25.198780 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00594133_7443_4a52_b929_22e644c37f09.slice/crio-f31a41bb8dc90422975091f693390b86eb83a0755b52ead5b0f6634930095952 WatchSource:0}: Error finding container f31a41bb8dc90422975091f693390b86eb83a0755b52ead5b0f6634930095952: Status 404 returned error can't find the container with id f31a41bb8dc90422975091f693390b86eb83a0755b52ead5b0f6634930095952 Nov 24 21:03:25 crc kubenswrapper[4812]: I1124 21:03:25.210964 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c7jl"] Nov 24 21:03:25 crc kubenswrapper[4812]: I1124 21:03:25.221422 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 21:03:25 crc kubenswrapper[4812]: I1124 21:03:25.520017 4812 generic.go:334] "Generic (PLEG): container finished" podID="00594133-7443-4a52-b929-22e644c37f09" containerID="220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead" exitCode=0 Nov 24 21:03:25 crc kubenswrapper[4812]: I1124 21:03:25.520393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c7jl" event={"ID":"00594133-7443-4a52-b929-22e644c37f09","Type":"ContainerDied","Data":"220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead"} Nov 24 21:03:25 crc kubenswrapper[4812]: I1124 21:03:25.520436 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c7jl" event={"ID":"00594133-7443-4a52-b929-22e644c37f09","Type":"ContainerStarted","Data":"f31a41bb8dc90422975091f693390b86eb83a0755b52ead5b0f6634930095952"} Nov 24 21:03:26 crc kubenswrapper[4812]: I1124 21:03:26.626691 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c7jl" event={"ID":"00594133-7443-4a52-b929-22e644c37f09","Type":"ContainerStarted","Data":"00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3"} Nov 24 21:03:29 crc kubenswrapper[4812]: I1124 21:03:29.657033 4812 generic.go:334] "Generic (PLEG): container finished" podID="09f77557-419d-478a-a804-0b610554d370" containerID="703d2de288c04ba940a1a9387229253ec96ff90c78ddb281f55b0cdd1878fd6e" exitCode=0 Nov 24 21:03:29 crc kubenswrapper[4812]: I1124 21:03:29.657113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"09f77557-419d-478a-a804-0b610554d370","Type":"ContainerDied","Data":"703d2de288c04ba940a1a9387229253ec96ff90c78ddb281f55b0cdd1878fd6e"} Nov 24 21:03:30 crc kubenswrapper[4812]: I1124 21:03:30.676049 4812 generic.go:334] "Generic (PLEG): container finished" podID="19455782-88b2-40bd-afcd-f6334de7013a" containerID="67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29" exitCode=0 Nov 24 21:03:30 crc kubenswrapper[4812]: I1124 21:03:30.676171 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerDied","Data":"67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29"} Nov 24 21:03:31 crc kubenswrapper[4812]: I1124 21:03:31.689299 4812 generic.go:334] "Generic (PLEG): container finished" podID="00594133-7443-4a52-b929-22e644c37f09" containerID="00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3" exitCode=0 Nov 24 21:03:31 crc kubenswrapper[4812]: I1124 21:03:31.689697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c7jl" event={"ID":"00594133-7443-4a52-b929-22e644c37f09","Type":"ContainerDied","Data":"00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3"} Nov 24 21:03:33 crc kubenswrapper[4812]: I1124 21:03:33.717091 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c7jl" event={"ID":"00594133-7443-4a52-b929-22e644c37f09","Type":"ContainerStarted","Data":"5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2"} Nov 24 21:03:33 crc kubenswrapper[4812]: I1124 21:03:33.737918 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"09f77557-419d-478a-a804-0b610554d370","Type":"ContainerStarted","Data":"b9af8ccb68c301ee6175b7d01d39c598ed0aab2cd2e3c4f76856b170fcd854cf"} Nov 24 21:03:33 crc kubenswrapper[4812]: I1124 21:03:33.748972 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2c7jl" podStartSLOduration=2.721810125 podStartE2EDuration="9.748949509s" podCreationTimestamp="2025-11-24 21:03:24 +0000 UTC" firstStartedPulling="2025-11-24 21:03:25.536753774 +0000 UTC m=+6399.325706145" lastFinishedPulling="2025-11-24 21:03:32.563893158 +0000 UTC m=+6406.352845529" observedRunningTime="2025-11-24 21:03:33.742445024 +0000 UTC m=+6407.531397395" watchObservedRunningTime="2025-11-24 21:03:33.748949509 +0000 UTC m=+6407.537901870" Nov 24 21:03:34 crc kubenswrapper[4812]: I1124 21:03:34.669868 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:34 crc kubenswrapper[4812]: I1124 21:03:34.670162 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:03:35 crc kubenswrapper[4812]: I1124 21:03:35.738832 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2c7jl" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="registry-server" probeResult="failure" output=< Nov 24 21:03:35 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:03:35 crc kubenswrapper[4812]: > Nov 24 21:03:36 crc kubenswrapper[4812]: I1124 21:03:36.774025 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"09f77557-419d-478a-a804-0b610554d370","Type":"ContainerStarted","Data":"f9f27020086ff980df048207f05310d1dd2594368e5bf73de0f79815659a3e37"} Nov 24 21:03:36 crc kubenswrapper[4812]: I1124 21:03:36.774356 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:36 crc kubenswrapper[4812]: I1124 21:03:36.782604 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 24 21:03:36 crc kubenswrapper[4812]: I1124 21:03:36.799159 4812 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.100130413 podStartE2EDuration="21.799137267s" podCreationTimestamp="2025-11-24 21:03:15 +0000 UTC" firstStartedPulling="2025-11-24 21:03:16.864954866 +0000 UTC m=+6390.653907237" lastFinishedPulling="2025-11-24 21:03:32.56396172 +0000 UTC m=+6406.352914091" observedRunningTime="2025-11-24 21:03:36.796531224 +0000 UTC m=+6410.585483595" watchObservedRunningTime="2025-11-24 21:03:36.799137267 +0000 UTC m=+6410.588089648" Nov 24 21:03:39 crc kubenswrapper[4812]: I1124 21:03:39.817397 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerStarted","Data":"bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7"} Nov 24 21:03:43 crc kubenswrapper[4812]: I1124 21:03:43.866372 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerStarted","Data":"4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6"} Nov 24 21:03:45 crc kubenswrapper[4812]: I1124 21:03:45.726158 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2c7jl" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="registry-server" probeResult="failure" output=< Nov 24 21:03:45 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:03:45 crc kubenswrapper[4812]: > Nov 24 21:03:47 crc kubenswrapper[4812]: I1124 21:03:47.907905 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerStarted","Data":"1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb"} Nov 24 21:03:47 crc kubenswrapper[4812]: I1124 21:03:47.930509 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.725030873 podStartE2EDuration="32.93049193s" podCreationTimestamp="2025-11-24 21:03:15 +0000 UTC" firstStartedPulling="2025-11-24 21:03:17.496630885 +0000 UTC m=+6391.285583256" lastFinishedPulling="2025-11-24 21:03:46.702091932 +0000 UTC m=+6420.491044313" observedRunningTime="2025-11-24 21:03:47.924365476 +0000 UTC m=+6421.713317847" watchObservedRunningTime="2025-11-24 21:03:47.93049193 +0000 UTC m=+6421.719444311" Nov 24 21:03:51 crc kubenswrapper[4812]: I1124 21:03:51.964667 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 21:03:55 crc kubenswrapper[4812]: I1124 21:03:55.723760 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2c7jl" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="registry-server" probeResult="failure" output=< Nov 24 21:03:55 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:03:55 crc kubenswrapper[4812]: > Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.469168 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.472670 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.474851 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.474880 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.484293 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.611427 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-scripts\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.611555 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-run-httpd\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.611586 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-config-data\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.611640 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.611674 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-log-httpd\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.611692 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgtwn\" (UniqueName: \"kubernetes.io/projected/0178505f-e93d-49f9-aa8c-f68abc222d19-kube-api-access-rgtwn\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.611739 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.713474 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 
21:03:56.713542 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-scripts\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.713606 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-run-httpd\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.713629 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-config-data\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.713683 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.713714 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-log-httpd\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.713732 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgtwn\" (UniqueName: \"kubernetes.io/projected/0178505f-e93d-49f9-aa8c-f68abc222d19-kube-api-access-rgtwn\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.714529 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-log-httpd\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.714558 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-run-httpd\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.720928 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-config-data\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.721552 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-scripts\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.721682 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.730219 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.742668 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgtwn\" (UniqueName: \"kubernetes.io/projected/0178505f-e93d-49f9-aa8c-f68abc222d19-kube-api-access-rgtwn\") pod \"ceilometer-0\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " pod="openstack/ceilometer-0" Nov 24 21:03:56 crc kubenswrapper[4812]: I1124 21:03:56.798949 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:03:57 crc kubenswrapper[4812]: I1124 21:03:57.326795 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:03:58 crc kubenswrapper[4812]: I1124 21:03:58.020694 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerStarted","Data":"8c7c2267058699c04024c87eafda81c4e172db95d1428ea96b28a34f97a44099"} Nov 24 21:03:59 crc kubenswrapper[4812]: I1124 21:03:59.032892 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerStarted","Data":"5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84"} Nov 24 21:03:59 crc kubenswrapper[4812]: I1124 21:03:59.033107 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerStarted","Data":"e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784"} Nov 24 21:04:00 crc kubenswrapper[4812]: I1124 21:04:00.051846 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerStarted","Data":"ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867"} Nov 24 21:04:01 crc kubenswrapper[4812]: I1124 21:04:01.005036 4812 scope.go:117] "RemoveContainer" containerID="39fbe1215e4991671722376c2339d8e264aeb8e3c219f841207fba5102ef9d57" Nov 24 21:04:01 crc kubenswrapper[4812]: I1124 21:04:01.053231 4812 scope.go:117] "RemoveContainer" containerID="ff429dcfaf040aff6ae0dda13aed7deb09b8efc304d2b742ecf17720af05bf1d" Nov 24 21:04:01 crc kubenswrapper[4812]: I1124 21:04:01.135060 4812 scope.go:117] "RemoveContainer" containerID="c09b6ddfcbf4f3545bd309b24e14737259ce67e005a61121e2600456ac7086fa" Nov 24 21:04:01 crc kubenswrapper[4812]: I1124 21:04:01.966301 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:01 crc kubenswrapper[4812]: I1124 21:04:01.970543 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.073693 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8317-account-create-wcnt7"] Nov 24 21:04:02 crc 
kubenswrapper[4812]: I1124 21:04:02.082471 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerStarted","Data":"487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e"} Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.082567 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.086473 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.087804 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-c82f7"] Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.100344 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-fl2tv"] Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.114744 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-f8hch"] Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.124538 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8317-account-create-wcnt7"] Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.133640 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-c82f7"] Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.146757 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-fl2tv"] Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.154177 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-f8hch"] Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.159751 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.072521785 podStartE2EDuration="6.159727701s" podCreationTimestamp="2025-11-24 21:03:56 +0000 UTC" firstStartedPulling="2025-11-24 21:03:57.331941302 +0000 UTC m=+6431.120893683" lastFinishedPulling="2025-11-24 21:04:01.419147228 +0000 UTC m=+6435.208099599" observedRunningTime="2025-11-24 21:04:02.104148897 +0000 UTC m=+6435.893101278" watchObservedRunningTime="2025-11-24 21:04:02.159727701 +0000 UTC m=+6435.948680072" Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.982389 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ab4449a-a536-4c7d-a405-fb3f44af9ed7" path="/var/lib/kubelet/pods/1ab4449a-a536-4c7d-a405-fb3f44af9ed7/volumes" Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.984936 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71f85d53-13a9-4baa-bd62-3015c5c95019" path="/var/lib/kubelet/pods/71f85d53-13a9-4baa-bd62-3015c5c95019/volumes" Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.992642 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7576e0ff-d8e5-48a1-a022-8196382f4a32" path="/var/lib/kubelet/pods/7576e0ff-d8e5-48a1-a022-8196382f4a32/volumes" Nov 24 21:04:02 crc kubenswrapper[4812]: I1124 21:04:02.993876 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b21ece-3976-4e21-8b32-1e316ad1af2c" path="/var/lib/kubelet/pods/75b21ece-3976-4e21-8b32-1e316ad1af2c/volumes" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.049256 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-04b0-account-create-7n4dp"] Nov 24 
21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.061910 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1080-account-create-6b8hj"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.079790 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-04b0-account-create-7n4dp"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.093538 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1080-account-create-6b8hj"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.664108 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.674788 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" containerName="openstackclient" containerID="cri-o://09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3" gracePeriod=2 Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.692865 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d5c10c0e-5a45-4741-a82f-985844fa105c" podUID="eca6b905-0f46-4617-b411-82c81f4ef400" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.694170 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.707863 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:03 crc kubenswrapper[4812]: E1124 21:04:03.708487 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" containerName="openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.708512 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" containerName="openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.708746 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" containerName="openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.709639 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.726874 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="eca6b905-0f46-4617-b411-82c81f4ef400" podUID="db618fd5-6493-454f-aaa3-6800c4a15546" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.730398 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:03 crc kubenswrapper[4812]: E1124 21:04:03.739973 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-fz6sv openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-fz6sv openstack-config openstack-config-secret]: context canceled" pod="openstack/openstackclient" podUID="eca6b905-0f46-4617-b411-82c81f4ef400" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.740973 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.753880 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.757300 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.770685 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.795913 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db618fd5-6493-454f-aaa3-6800c4a15546-openstack-config\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.795975 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db618fd5-6493-454f-aaa3-6800c4a15546-openstack-config-secret\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.796072 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84mdd\" (UniqueName: \"kubernetes.io/projected/db618fd5-6493-454f-aaa3-6800c4a15546-kube-api-access-84mdd\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.796268 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db618fd5-6493-454f-aaa3-6800c4a15546-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.898409 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84mdd\" (UniqueName: \"kubernetes.io/projected/db618fd5-6493-454f-aaa3-6800c4a15546-kube-api-access-84mdd\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 
21:04:03.898511 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db618fd5-6493-454f-aaa3-6800c4a15546-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.898557 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db618fd5-6493-454f-aaa3-6800c4a15546-openstack-config\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.898588 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db618fd5-6493-454f-aaa3-6800c4a15546-openstack-config-secret\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.899548 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/db618fd5-6493-454f-aaa3-6800c4a15546-openstack-config\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.908036 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/db618fd5-6493-454f-aaa3-6800c4a15546-openstack-config-secret\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.908754 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db618fd5-6493-454f-aaa3-6800c4a15546-combined-ca-bundle\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:03 crc kubenswrapper[4812]: I1124 21:04:03.922223 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84mdd\" (UniqueName: \"kubernetes.io/projected/db618fd5-6493-454f-aaa3-6800c4a15546-kube-api-access-84mdd\") pod \"openstackclient\" (UID: \"db618fd5-6493-454f-aaa3-6800c4a15546\") " pod="openstack/openstackclient" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.099108 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.104720 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="eca6b905-0f46-4617-b411-82c81f4ef400" podUID="db618fd5-6493-454f-aaa3-6800c4a15546" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.105848 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.113066 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.116243 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="eca6b905-0f46-4617-b411-82c81f4ef400" podUID="db618fd5-6493-454f-aaa3-6800c4a15546" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.693028 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.740410 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.808665 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.864906 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.865141 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="prometheus" containerID="cri-o://bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7" gracePeriod=600 Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.865536 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="config-reloader" containerID="cri-o://4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6" gracePeriod=600 Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.865586 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="thanos-sidecar" containerID="cri-o://1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb" gracePeriod=600 Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.977064 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a6b846-c044-4e56-bc3a-25fec00002fd" path="/var/lib/kubelet/pods/18a6b846-c044-4e56-bc3a-25fec00002fd/volumes" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.979467 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca6b905-0f46-4617-b411-82c81f4ef400" path="/var/lib/kubelet/pods/eca6b905-0f46-4617-b411-82c81f4ef400/volumes" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.979900 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f551f2e4-d0b9-4cf1-9131-24075e1a9b92" path="/var/lib/kubelet/pods/f551f2e4-d0b9-4cf1-9131-24075e1a9b92/volumes" Nov 24 21:04:04 crc kubenswrapper[4812]: I1124 21:04:04.981986 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c7jl"] Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.110776 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"db618fd5-6493-454f-aaa3-6800c4a15546","Type":"ContainerStarted","Data":"5eaf0e84fbb97ce9ad3a18e627bf4c1cfa5ab431ebe0b2b4802a6a9516d1b537"} Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.110817 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"db618fd5-6493-454f-aaa3-6800c4a15546","Type":"ContainerStarted","Data":"fa8be7c98b3946b958f87ee0cc8fc5b644e99926d66d15b395f9c769cbf4d9af"} Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.114881 4812 generic.go:334] "Generic (PLEG): container finished" podID="19455782-88b2-40bd-afcd-f6334de7013a" containerID="1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb" exitCode=0 Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.114909 4812 generic.go:334] "Generic (PLEG): container finished" podID="19455782-88b2-40bd-afcd-f6334de7013a" containerID="bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7" exitCode=0 Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.115897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerDied","Data":"1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb"} Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.115926 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerDied","Data":"bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7"} Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.115960 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.141880 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="eca6b905-0f46-4617-b411-82c81f4ef400" podUID="db618fd5-6493-454f-aaa3-6800c4a15546" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.143917 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.14389462 podStartE2EDuration="2.14389462s" podCreationTimestamp="2025-11-24 21:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:04:05.137041026 +0000 UTC m=+6438.925993397" watchObservedRunningTime="2025-11-24 21:04:05.14389462 +0000 UTC m=+6438.932847001" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.888873 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.946954 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.947101 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19455782-88b2-40bd-afcd-f6334de7013a-prometheus-metric-storage-rulefiles-0\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.947136 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-config\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.947169 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-thanos-prometheus-http-client-file\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.947250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19455782-88b2-40bd-afcd-f6334de7013a-config-out\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.947280 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-web-config\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.947312 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-tls-assets\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.947398 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c2tm\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-kube-api-access-7c2tm\") pod \"19455782-88b2-40bd-afcd-f6334de7013a\" (UID: \"19455782-88b2-40bd-afcd-f6334de7013a\") " Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.948272 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19455782-88b2-40bd-afcd-f6334de7013a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.958663 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19455782-88b2-40bd-afcd-f6334de7013a-config-out" (OuterVolumeSpecName: "config-out") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.959239 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-kube-api-access-7c2tm" (OuterVolumeSpecName: "kube-api-access-7c2tm") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "kube-api-access-7c2tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.960504 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-config" (OuterVolumeSpecName: "config") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.960698 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:04:05 crc kubenswrapper[4812]: I1124 21:04:05.970572 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.000872 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "pvc-888926e7-09ea-417e-9dc3-e9734339eb4c". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.005229 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-web-config" (OuterVolumeSpecName: "web-config") pod "19455782-88b2-40bd-afcd-f6334de7013a" (UID: "19455782-88b2-40bd-afcd-f6334de7013a"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052678 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c2tm\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-kube-api-access-7c2tm\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052750 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") on node \"crc\" " Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052766 4812 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/19455782-88b2-40bd-afcd-f6334de7013a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052778 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052789 4812 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052799 4812 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/19455782-88b2-40bd-afcd-f6334de7013a-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052807 4812 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/19455782-88b2-40bd-afcd-f6334de7013a-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.052815 4812 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/19455782-88b2-40bd-afcd-f6334de7013a-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.108021 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.138145 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.138659 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-888926e7-09ea-417e-9dc3-e9734339eb4c" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c") on node "crc"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160057 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-9cz4n"]
Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.160566 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="config-reloader"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160579 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="config-reloader"
Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.160596 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="thanos-sidecar"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160604 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="thanos-sidecar"
Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.160621 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="prometheus"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160627 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="prometheus"
Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.160636 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="init-config-reloader"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160642 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="init-config-reloader"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160869 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="prometheus"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160917 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="config-reloader"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.160929 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="19455782-88b2-40bd-afcd-f6334de7013a" containerName="thanos-sidecar"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.161671 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.162785 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config-secret\") pod \"d5c10c0e-5a45-4741-a82f-985844fa105c\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") "
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.163145 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config\") pod \"d5c10c0e-5a45-4741-a82f-985844fa105c\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") "
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.163211 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-combined-ca-bundle\") pod \"d5c10c0e-5a45-4741-a82f-985844fa105c\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") "
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.163641 4812 generic.go:334] "Generic (PLEG): container finished" podID="d5c10c0e-5a45-4741-a82f-985844fa105c" containerID="09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3" exitCode=137
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.163849 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.164916 4812 scope.go:117] "RemoveContainer" containerID="09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.165135 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l4l4\" (UniqueName: \"kubernetes.io/projected/d5c10c0e-5a45-4741-a82f-985844fa105c-kube-api-access-2l4l4\") pod \"d5c10c0e-5a45-4741-a82f-985844fa105c\" (UID: \"d5c10c0e-5a45-4741-a82f-985844fa105c\") "
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.166491 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.168988 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c10c0e-5a45-4741-a82f-985844fa105c-kube-api-access-2l4l4" (OuterVolumeSpecName: "kube-api-access-2l4l4") pod "d5c10c0e-5a45-4741-a82f-985844fa105c" (UID: "d5c10c0e-5a45-4741-a82f-985844fa105c"). InnerVolumeSpecName "kube-api-access-2l4l4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.169292 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9cz4n"]
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.175176 4812 generic.go:334] "Generic (PLEG): container finished" podID="19455782-88b2-40bd-afcd-f6334de7013a" containerID="4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6" exitCode=0
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.175436 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2c7jl" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="registry-server" containerID="cri-o://5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2" gracePeriod=2
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.175542 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.176450 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerDied","Data":"4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6"}
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.176562 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"19455782-88b2-40bd-afcd-f6334de7013a","Type":"ContainerDied","Data":"0459ae699d76737adcc926bcfc7edaf60cd8f6d2612cbfa15f01d43f505e6d3f"}
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.242036 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5c10c0e-5a45-4741-a82f-985844fa105c" (UID: "d5c10c0e-5a45-4741-a82f-985844fa105c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.268665 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e44fd1-85cd-40a3-b528-bce9911ea477-operator-scripts\") pod \"aodh-db-create-9cz4n\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.268716 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctl6\" (UniqueName: \"kubernetes.io/projected/c6e44fd1-85cd-40a3-b528-bce9911ea477-kube-api-access-9ctl6\") pod \"aodh-db-create-9cz4n\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.268814 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l4l4\" (UniqueName: \"kubernetes.io/projected/d5c10c0e-5a45-4741-a82f-985844fa105c-kube-api-access-2l4l4\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.268824 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.291944 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d5c10c0e-5a45-4741-a82f-985844fa105c" (UID: "d5c10c0e-5a45-4741-a82f-985844fa105c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.297891 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d5c10c0e-5a45-4741-a82f-985844fa105c" (UID: "d5c10c0e-5a45-4741-a82f-985844fa105c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.343433 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-2823-account-create-qqmxf"]
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.362608 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2823-account-create-qqmxf"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.374206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e44fd1-85cd-40a3-b528-bce9911ea477-operator-scripts\") pod \"aodh-db-create-9cz4n\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.374292 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ctl6\" (UniqueName: \"kubernetes.io/projected/c6e44fd1-85cd-40a3-b528-bce9911ea477-kube-api-access-9ctl6\") pod \"aodh-db-create-9cz4n\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.374375 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-operator-scripts\") pod \"aodh-2823-account-create-qqmxf\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " pod="openstack/aodh-2823-account-create-qqmxf"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.374579 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrwt7\" (UniqueName: \"kubernetes.io/projected/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-kube-api-access-jrwt7\") pod \"aodh-2823-account-create-qqmxf\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " pod="openstack/aodh-2823-account-create-qqmxf"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.374797 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.374812 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d5c10c0e-5a45-4741-a82f-985844fa105c-openstack-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.377852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e44fd1-85cd-40a3-b528-bce9911ea477-operator-scripts\") pod \"aodh-db-create-9cz4n\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.385085 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.390198 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2823-account-create-qqmxf"]
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.394302 4812 scope.go:117] "RemoveContainer" containerID="09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3"
Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.400056 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3\": container with ID starting with 09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3 not found: ID does not exist" containerID="09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.400104 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3"} err="failed to get container status \"09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3\": rpc error: code = NotFound desc = could not find container \"09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3\": container with ID starting with 09f2fa04d98ac84966e0a176fbcb820f8624c7ebc4121dc8d7c96b05cea6fdf3 not found: ID does not exist"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.400125 4812 scope.go:117] "RemoveContainer" containerID="1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.400676 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctl6\" (UniqueName: \"kubernetes.io/projected/c6e44fd1-85cd-40a3-b528-bce9911ea477-kube-api-access-9ctl6\") pod \"aodh-db-create-9cz4n\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.407035 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9cz4n"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.447859 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.459063 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.473574 4812 scope.go:117] "RemoveContainer" containerID="4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.476162 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrwt7\" (UniqueName: \"kubernetes.io/projected/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-kube-api-access-jrwt7\") pod \"aodh-2823-account-create-qqmxf\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " pod="openstack/aodh-2823-account-create-qqmxf"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.476295 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-operator-scripts\") pod \"aodh-2823-account-create-qqmxf\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " pod="openstack/aodh-2823-account-create-qqmxf"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.477183 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-operator-scripts\") pod \"aodh-2823-account-create-qqmxf\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " pod="openstack/aodh-2823-account-create-qqmxf"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.482747 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.486715 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.491646 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.497367 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.497575 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8xp77"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.497693 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.498445 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.499368 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.519406 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.535624 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.541418 4812 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d5c10c0e-5a45-4741-a82f-985844fa105c" podUID="db618fd5-6493-454f-aaa3-6800c4a15546"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.541875 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrwt7\" (UniqueName: \"kubernetes.io/projected/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-kube-api-access-jrwt7\") pod \"aodh-2823-account-create-qqmxf\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " pod="openstack/aodh-2823-account-create-qqmxf"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.566995 4812 scope.go:117] "RemoveContainer" containerID="bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.605840 4812 scope.go:117] "RemoveContainer" containerID="67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.638625 4812 scope.go:117] "RemoveContainer" containerID="1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb"
Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.645246 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb\": container with ID starting with 1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb not found: ID does not exist" containerID="1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb"
Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.645304 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb"} err="failed to get container status \"1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb\": rpc error: code
= NotFound desc = could not find container \"1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb\": container with ID starting with 1b111f25a89c02a165e9cb91dda97909f4030cfe1a51143738fde316fa2808fb not found: ID does not exist" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.645354 4812 scope.go:117] "RemoveContainer" containerID="4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6" Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.646723 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6\": container with ID starting with 4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6 not found: ID does not exist" containerID="4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.646747 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6"} err="failed to get container status \"4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6\": rpc error: code = NotFound desc = could not find container \"4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6\": container with ID starting with 4661323ef7c2caf22fdc0baaf39dae473252b88add838e5cc01b802f18b83fd6 not found: ID does not exist" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.646761 4812 scope.go:117] "RemoveContainer" containerID="bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7" Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.649210 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7\": container with ID starting with bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7 not found: ID does not exist" containerID="bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.649253 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7"} err="failed to get container status \"bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7\": rpc error: code = NotFound desc = could not find container \"bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7\": container with ID starting with bf8abc54c3d65c7fe451c66c870108a235cff233e1bed221e0d686664e02a3e7 not found: ID does not exist" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.649281 4812 scope.go:117] "RemoveContainer" containerID="67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29" Nov 24 21:04:06 crc kubenswrapper[4812]: E1124 21:04:06.651247 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29\": container with ID starting with 67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29 not found: ID does not exist" containerID="67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.651293 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29"} err="failed to get container status \"67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29\": rpc error: code = NotFound desc = could not find container \"67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29\": container with ID starting with 67c7f46232cd9e43116c7e1624ca0f80a1e206ecbc9397e3dcdbc4878887eb29 not found: ID does not exist" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.682976 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b3547ac3-1f61-41e5-9674-317a8280dbfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b3547ac3-1f61-41e5-9674-317a8280dbfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683050 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683073 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vw7k\" (UniqueName: \"kubernetes.io/projected/b3547ac3-1f61-41e5-9674-317a8280dbfc-kube-api-access-7vw7k\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683097 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683117 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b3547ac3-1f61-41e5-9674-317a8280dbfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683160 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc 
kubenswrapper[4812]: I1124 21:04:06.683225 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683251 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683286 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.683304 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.726593 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2823-account-create-qqmxf" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.784785 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b3547ac3-1f61-41e5-9674-317a8280dbfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.784831 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b3547ac3-1f61-41e5-9674-317a8280dbfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.784859 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.784879 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vw7k\" (UniqueName: \"kubernetes.io/projected/b3547ac3-1f61-41e5-9674-317a8280dbfc-kube-api-access-7vw7k\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.784903 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.784924 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b3547ac3-1f61-41e5-9674-317a8280dbfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.784962 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.785034 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.785063 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.785090 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.785108 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.785767 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b3547ac3-1f61-41e5-9674-317a8280dbfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.789094 4812 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.789148 4812 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e88d844691aa2a1fc69e7f251357e53e73a5ccd47896beb8c8f42aaa6dfed265/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.791607 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.792356 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.794286 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 
21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.794621 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.796127 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.796426 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b3547ac3-1f61-41e5-9674-317a8280dbfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.797945 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b3547ac3-1f61-41e5-9674-317a8280dbfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.801028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b3547ac3-1f61-41e5-9674-317a8280dbfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.804833 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vw7k\" (UniqueName: \"kubernetes.io/projected/b3547ac3-1f61-41e5-9674-317a8280dbfc-kube-api-access-7vw7k\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.814618 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.871067 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-888926e7-09ea-417e-9dc3-e9734339eb4c\") pod \"prometheus-metric-storage-0\" (UID: \"b3547ac3-1f61-41e5-9674-317a8280dbfc\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.981387 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19455782-88b2-40bd-afcd-f6334de7013a" path="/var/lib/kubelet/pods/19455782-88b2-40bd-afcd-f6334de7013a/volumes" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.982416 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c10c0e-5a45-4741-a82f-985844fa105c" path="/var/lib/kubelet/pods/d5c10c0e-5a45-4741-a82f-985844fa105c/volumes" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.988412 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-catalog-content\") pod \"00594133-7443-4a52-b929-22e644c37f09\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.988513 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6pp6\" (UniqueName: \"kubernetes.io/projected/00594133-7443-4a52-b929-22e644c37f09-kube-api-access-r6pp6\") pod \"00594133-7443-4a52-b929-22e644c37f09\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.988664 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-utilities\") pod \"00594133-7443-4a52-b929-22e644c37f09\" (UID: \"00594133-7443-4a52-b929-22e644c37f09\") " Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.992101 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-utilities" (OuterVolumeSpecName: "utilities") pod "00594133-7443-4a52-b929-22e644c37f09" (UID: "00594133-7443-4a52-b929-22e644c37f09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:04:06 crc kubenswrapper[4812]: I1124 21:04:06.995542 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00594133-7443-4a52-b929-22e644c37f09-kube-api-access-r6pp6" (OuterVolumeSpecName: "kube-api-access-r6pp6") pod "00594133-7443-4a52-b929-22e644c37f09" (UID: "00594133-7443-4a52-b929-22e644c37f09"). InnerVolumeSpecName "kube-api-access-r6pp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.075253 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-9cz4n"] Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.095501 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6pp6\" (UniqueName: \"kubernetes.io/projected/00594133-7443-4a52-b929-22e644c37f09-kube-api-access-r6pp6\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.095712 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.117956 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00594133-7443-4a52-b929-22e644c37f09" (UID: "00594133-7443-4a52-b929-22e644c37f09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.150128 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.189128 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9cz4n" event={"ID":"c6e44fd1-85cd-40a3-b528-bce9911ea477","Type":"ContainerStarted","Data":"d0198688e769dea98dfdd6be2c3a7b44289599fd78e0573c34e1d01d69ec5c60"} Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.195872 4812 generic.go:334] "Generic (PLEG): container finished" podID="00594133-7443-4a52-b929-22e644c37f09" containerID="5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2" exitCode=0 Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.195935 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c7jl" event={"ID":"00594133-7443-4a52-b929-22e644c37f09","Type":"ContainerDied","Data":"5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2"} Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.195966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c7jl" event={"ID":"00594133-7443-4a52-b929-22e644c37f09","Type":"ContainerDied","Data":"f31a41bb8dc90422975091f693390b86eb83a0755b52ead5b0f6634930095952"} Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.195989 4812 scope.go:117] "RemoveContainer" containerID="5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.196135 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c7jl" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.199303 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00594133-7443-4a52-b929-22e644c37f09-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.225643 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2823-account-create-qqmxf"] Nov 24 21:04:07 crc kubenswrapper[4812]: W1124 21:04:07.226715 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50c7c0aa_474c_48dd_9f19_0f2a9ed16e7b.slice/crio-0dd4fcd71c8cba33d93010b7f385c4b83e5c00ba8345c6b508571e37b24344c0 WatchSource:0}: Error finding container 0dd4fcd71c8cba33d93010b7f385c4b83e5c00ba8345c6b508571e37b24344c0: Status 404 returned error can't find the container with id 0dd4fcd71c8cba33d93010b7f385c4b83e5c00ba8345c6b508571e37b24344c0 Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.307223 4812 scope.go:117] "RemoveContainer" containerID="00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.326686 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c7jl"] Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.337549 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2c7jl"] Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.349358 4812 scope.go:117] "RemoveContainer" containerID="220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.397869 4812 scope.go:117] "RemoveContainer" containerID="5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2" Nov 24 21:04:07 crc kubenswrapper[4812]: E1124 21:04:07.399923 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2\": container with ID starting with 5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2 not found: ID does not exist" containerID="5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.399960 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2"} err="failed to get container status \"5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2\": rpc error: code = NotFound desc = could not find container \"5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2\": container with ID starting with 5f1352be8e9890a4b539c5e413bb6e047c463e5f82047d67f29a5401ee792af2 not found: ID does not exist" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.400026 4812 scope.go:117] "RemoveContainer" containerID="00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3" Nov 24 21:04:07 crc kubenswrapper[4812]: E1124 21:04:07.406770 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3\": container with ID starting with 00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3 not found: ID does not exist" 
containerID="00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.406838 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3"} err="failed to get container status \"00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3\": rpc error: code = NotFound desc = could not find container \"00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3\": container with ID starting with 00f87180d2f93c1b449c1f2ba4490d81f4ca6e386d86d13e4a3921603a8f5ea3 not found: ID does not exist" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.406865 4812 scope.go:117] "RemoveContainer" containerID="220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead" Nov 24 21:04:07 crc kubenswrapper[4812]: E1124 21:04:07.407402 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead\": container with ID starting with 220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead not found: ID does not exist" containerID="220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.407487 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead"} err="failed to get container status \"220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead\": rpc error: code = NotFound desc = could not find container \"220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead\": container with ID starting with 220b3fab276905fc618632d6ae0743524851921e35f067b35a7cbbd955ed2ead not found: ID does not exist" Nov 24 21:04:07 crc kubenswrapper[4812]: I1124 21:04:07.665383 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:04:07 crc kubenswrapper[4812]: W1124 21:04:07.754307 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3547ac3_1f61_41e5_9674_317a8280dbfc.slice/crio-589750a9b65c3233d4410500bd2dc98d6f5eca7e51429d8ed11eb48c14b153ce WatchSource:0}: Error finding container 589750a9b65c3233d4410500bd2dc98d6f5eca7e51429d8ed11eb48c14b153ce: Status 404 returned error can't find the container with id 589750a9b65c3233d4410500bd2dc98d6f5eca7e51429d8ed11eb48c14b153ce Nov 24 21:04:08 crc kubenswrapper[4812]: I1124 21:04:08.216531 4812 generic.go:334] "Generic (PLEG): container finished" podID="50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b" containerID="0a61db4434949811c58acd594a6e1e36191bda7cbcec247c9234fb322574300e" exitCode=0 Nov 24 21:04:08 crc kubenswrapper[4812]: I1124 21:04:08.216640 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2823-account-create-qqmxf" event={"ID":"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b","Type":"ContainerDied","Data":"0a61db4434949811c58acd594a6e1e36191bda7cbcec247c9234fb322574300e"} Nov 24 21:04:08 crc kubenswrapper[4812]: I1124 21:04:08.216688 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2823-account-create-qqmxf" event={"ID":"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b","Type":"ContainerStarted","Data":"0dd4fcd71c8cba33d93010b7f385c4b83e5c00ba8345c6b508571e37b24344c0"} Nov 24 21:04:08 crc kubenswrapper[4812]: I1124 
21:04:08.218156 4812 generic.go:334] "Generic (PLEG): container finished" podID="c6e44fd1-85cd-40a3-b528-bce9911ea477" containerID="6d32cbf8b6e86dddf9e0cb350e00299f1430f31ac2f5ec949206f2cda184c19e" exitCode=0 Nov 24 21:04:08 crc kubenswrapper[4812]: I1124 21:04:08.218252 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9cz4n" event={"ID":"c6e44fd1-85cd-40a3-b528-bce9911ea477","Type":"ContainerDied","Data":"6d32cbf8b6e86dddf9e0cb350e00299f1430f31ac2f5ec949206f2cda184c19e"} Nov 24 21:04:08 crc kubenswrapper[4812]: I1124 21:04:08.219505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b3547ac3-1f61-41e5-9674-317a8280dbfc","Type":"ContainerStarted","Data":"589750a9b65c3233d4410500bd2dc98d6f5eca7e51429d8ed11eb48c14b153ce"} Nov 24 21:04:08 crc kubenswrapper[4812]: I1124 21:04:08.998254 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00594133-7443-4a52-b929-22e644c37f09" path="/var/lib/kubelet/pods/00594133-7443-4a52-b929-22e644c37f09/volumes" Nov 24 21:04:09 crc kubenswrapper[4812]: I1124 21:04:09.771928 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9cz4n" Nov 24 21:04:09 crc kubenswrapper[4812]: I1124 21:04:09.778191 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2823-account-create-qqmxf" Nov 24 21:04:09 crc kubenswrapper[4812]: I1124 21:04:09.962660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e44fd1-85cd-40a3-b528-bce9911ea477-operator-scripts\") pod \"c6e44fd1-85cd-40a3-b528-bce9911ea477\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " Nov 24 21:04:09 crc kubenswrapper[4812]: I1124 21:04:09.962851 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ctl6\" (UniqueName: \"kubernetes.io/projected/c6e44fd1-85cd-40a3-b528-bce9911ea477-kube-api-access-9ctl6\") pod \"c6e44fd1-85cd-40a3-b528-bce9911ea477\" (UID: \"c6e44fd1-85cd-40a3-b528-bce9911ea477\") " Nov 24 21:04:09 crc kubenswrapper[4812]: I1124 21:04:09.962903 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrwt7\" (UniqueName: \"kubernetes.io/projected/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-kube-api-access-jrwt7\") pod \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " Nov 24 21:04:09 crc kubenswrapper[4812]: I1124 21:04:09.962958 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-operator-scripts\") pod \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\" (UID: \"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b\") " Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.114347 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6e44fd1-85cd-40a3-b528-bce9911ea477-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6e44fd1-85cd-40a3-b528-bce9911ea477" (UID: "c6e44fd1-85cd-40a3-b528-bce9911ea477"). InnerVolumeSpecName "operator-scripts". 
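
The "Generic (PLEG): container finished" entries above come from the pod lifecycle event generator, which relists container states and turns transitions into ContainerStarted/ContainerDied events for the sync loop, exit code attached. A toy relist diff illustrating the idea; the event type and state strings are simplified, not kubelet's data model.

package main

import "fmt"

// Compare the previous and current container states and emit the
// transition events the sync loop consumes.
type event struct{ id, kind string }

func relist(old, cur map[string]string) []event {
	var evs []event
	for id, state := range cur {
		if old[id] != state {
			switch state {
			case "running":
				evs = append(evs, event{id, "ContainerStarted"})
			case "exited":
				evs = append(evs, event{id, "ContainerDied"})
			}
		}
	}
	return evs
}

func main() {
	old := map[string]string{"6d32cbf8": "running"}
	cur := map[string]string{"6d32cbf8": "exited"}
	for _, e := range relist(old, cur) {
		fmt.Printf("SyncLoop (PLEG): event %s for container %s\n", e.kind, e.id)
	}
}
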
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.115016 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b" (UID: "50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.167161 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6e44fd1-85cd-40a3-b528-bce9911ea477-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.167429 4812 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.243366 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-9cz4n" event={"ID":"c6e44fd1-85cd-40a3-b528-bce9911ea477","Type":"ContainerDied","Data":"d0198688e769dea98dfdd6be2c3a7b44289599fd78e0573c34e1d01d69ec5c60"} Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.243423 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0198688e769dea98dfdd6be2c3a7b44289599fd78e0573c34e1d01d69ec5c60" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.243382 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-9cz4n" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.245613 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2823-account-create-qqmxf" event={"ID":"50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b","Type":"ContainerDied","Data":"0dd4fcd71c8cba33d93010b7f385c4b83e5c00ba8345c6b508571e37b24344c0"} Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.245683 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dd4fcd71c8cba33d93010b7f385c4b83e5c00ba8345c6b508571e37b24344c0" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.245710 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2823-account-create-qqmxf" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.316886 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-kube-api-access-jrwt7" (OuterVolumeSpecName: "kube-api-access-jrwt7") pod "50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b" (UID: "50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b"). InnerVolumeSpecName "kube-api-access-jrwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.319044 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e44fd1-85cd-40a3-b528-bce9911ea477-kube-api-access-9ctl6" (OuterVolumeSpecName: "kube-api-access-9ctl6") pod "c6e44fd1-85cd-40a3-b528-bce9911ea477" (UID: "c6e44fd1-85cd-40a3-b528-bce9911ea477"). InnerVolumeSpecName "kube-api-access-9ctl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.371658 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ctl6\" (UniqueName: \"kubernetes.io/projected/c6e44fd1-85cd-40a3-b528-bce9911ea477-kube-api-access-9ctl6\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:10 crc kubenswrapper[4812]: I1124 21:04:10.371913 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrwt7\" (UniqueName: \"kubernetes.io/projected/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b-kube-api-access-jrwt7\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.585832 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-zf4x4"] Nov 24 21:04:11 crc kubenswrapper[4812]: E1124 21:04:11.592844 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="registry-server" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.592881 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="registry-server" Nov 24 21:04:11 crc kubenswrapper[4812]: E1124 21:04:11.592897 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b" containerName="mariadb-account-create" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.592907 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b" containerName="mariadb-account-create" Nov 24 21:04:11 crc kubenswrapper[4812]: E1124 21:04:11.592932 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="extract-content" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.592941 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="extract-content" Nov 24 21:04:11 crc kubenswrapper[4812]: E1124 21:04:11.592959 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e44fd1-85cd-40a3-b528-bce9911ea477" containerName="mariadb-database-create" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.592968 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e44fd1-85cd-40a3-b528-bce9911ea477" containerName="mariadb-database-create" Nov 24 21:04:11 crc kubenswrapper[4812]: E1124 21:04:11.592981 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="extract-utilities" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.592989 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="extract-utilities" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.593268 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b" containerName="mariadb-account-create" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.593311 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e44fd1-85cd-40a3-b528-bce9911ea477" containerName="mariadb-database-create" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.593360 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="00594133-7443-4a52-b929-22e644c37f09" containerName="registry-server" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.594229 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.594742 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zf4x4"] Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.599543 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.600308 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.600513 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7kwfq" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.600679 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.698728 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4scqh\" (UniqueName: \"kubernetes.io/projected/f1751d50-a273-4cdd-8f49-6db0f32f1a71-kube-api-access-4scqh\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.698811 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-combined-ca-bundle\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.699209 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-scripts\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.699270 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-config-data\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.801100 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-scripts\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.801539 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-config-data\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.801752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4scqh\" (UniqueName: \"kubernetes.io/projected/f1751d50-a273-4cdd-8f49-6db0f32f1a71-kube-api-access-4scqh\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: 
I1124 21:04:11.801953 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-combined-ca-bundle\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.805349 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-scripts\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.805563 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-combined-ca-bundle\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.811325 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-config-data\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.819151 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4scqh\" (UniqueName: \"kubernetes.io/projected/f1751d50-a273-4cdd-8f49-6db0f32f1a71-kube-api-access-4scqh\") pod \"aodh-db-sync-zf4x4\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") " pod="openstack/aodh-db-sync-zf4x4" Nov 24 21:04:11 crc kubenswrapper[4812]: I1124 21:04:11.924369 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-zf4x4"
Nov 24 21:04:12 crc kubenswrapper[4812]: I1124 21:04:12.059523 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zsgrz"]
Nov 24 21:04:12 crc kubenswrapper[4812]: I1124 21:04:12.077104 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zsgrz"]
Nov 24 21:04:12 crc kubenswrapper[4812]: I1124 21:04:12.273446 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b3547ac3-1f61-41e5-9674-317a8280dbfc","Type":"ContainerStarted","Data":"eb31e9a645853e5c5219850e96e402256d7ab279fe4817bbaf109f3ce2e7784e"}
Nov 24 21:04:12 crc kubenswrapper[4812]: I1124 21:04:12.484636 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-zf4x4"]
Nov 24 21:04:12 crc kubenswrapper[4812]: W1124 21:04:12.489429 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1751d50_a273_4cdd_8f49_6db0f32f1a71.slice/crio-f691fbd17e21622a8f44264875dabf7615b580a9f3c2071c62caa021a2581cce WatchSource:0}: Error finding container f691fbd17e21622a8f44264875dabf7615b580a9f3c2071c62caa021a2581cce: Status 404 returned error can't find the container with id f691fbd17e21622a8f44264875dabf7615b580a9f3c2071c62caa021a2581cce
Nov 24 21:04:12 crc kubenswrapper[4812]: I1124 21:04:12.981825 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af7fb62-3e16-45a7-87da-83d6ecc2b6b2" path="/var/lib/kubelet/pods/2af7fb62-3e16-45a7-87da-83d6ecc2b6b2/volumes"
Nov 24 21:04:13 crc kubenswrapper[4812]: I1124 21:04:13.289174 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zf4x4" event={"ID":"f1751d50-a273-4cdd-8f49-6db0f32f1a71","Type":"ContainerStarted","Data":"f691fbd17e21622a8f44264875dabf7615b580a9f3c2071c62caa021a2581cce"}
Nov 24 21:04:17 crc kubenswrapper[4812]: I1124 21:04:17.336186 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zf4x4" event={"ID":"f1751d50-a273-4cdd-8f49-6db0f32f1a71","Type":"ContainerStarted","Data":"bcfbb245d45796ad0291923829c4a5919df24fbbee8aa365059c79e9a3455b21"}
Nov 24 21:04:17 crc kubenswrapper[4812]: I1124 21:04:17.351355 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-zf4x4" podStartSLOduration=2.484663054 podStartE2EDuration="6.351321115s" podCreationTimestamp="2025-11-24 21:04:11 +0000 UTC" firstStartedPulling="2025-11-24 21:04:12.492142717 +0000 UTC m=+6446.281095088" lastFinishedPulling="2025-11-24 21:04:16.358800778 +0000 UTC m=+6450.147753149" observedRunningTime="2025-11-24 21:04:17.347549948 +0000 UTC m=+6451.136502319" watchObservedRunningTime="2025-11-24 21:04:17.351321115 +0000 UTC m=+6451.140273476"
Nov 24 21:04:19 crc kubenswrapper[4812]: I1124 21:04:19.360252 4812 generic.go:334] "Generic (PLEG): container finished" podID="f1751d50-a273-4cdd-8f49-6db0f32f1a71" containerID="bcfbb245d45796ad0291923829c4a5919df24fbbee8aa365059c79e9a3455b21" exitCode=0
Nov 24 21:04:19 crc kubenswrapper[4812]: I1124 21:04:19.360311 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zf4x4" event={"ID":"f1751d50-a273-4cdd-8f49-6db0f32f1a71","Type":"ContainerDied","Data":"bcfbb245d45796ad0291923829c4a5919df24fbbee8aa365059c79e9a3455b21"}
Nov 24 21:04:19 crc kubenswrapper[4812]: I1124 21:04:19.363149 4812 generic.go:334] "Generic (PLEG): container finished" podID="b3547ac3-1f61-41e5-9674-317a8280dbfc" containerID="eb31e9a645853e5c5219850e96e402256d7ab279fe4817bbaf109f3ce2e7784e" exitCode=0
Nov 24 21:04:19 crc kubenswrapper[4812]: I1124 21:04:19.363175 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b3547ac3-1f61-41e5-9674-317a8280dbfc","Type":"ContainerDied","Data":"eb31e9a645853e5c5219850e96e402256d7ab279fe4817bbaf109f3ce2e7784e"}
Nov 24 21:04:20 crc kubenswrapper[4812]: I1124 21:04:20.380003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b3547ac3-1f61-41e5-9674-317a8280dbfc","Type":"ContainerStarted","Data":"b0ba059a3b143b1de587eb36d2a1102d1d4a82d99c6ae55531797fffd944331d"}
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.043032 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zf4x4"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.101049 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-combined-ca-bundle\") pod \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") "
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.101391 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4scqh\" (UniqueName: \"kubernetes.io/projected/f1751d50-a273-4cdd-8f49-6db0f32f1a71-kube-api-access-4scqh\") pod \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") "
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.101613 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-scripts\") pod \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") "
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.101710 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-config-data\") pod \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\" (UID: \"f1751d50-a273-4cdd-8f49-6db0f32f1a71\") "
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.127295 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1751d50-a273-4cdd-8f49-6db0f32f1a71-kube-api-access-4scqh" (OuterVolumeSpecName: "kube-api-access-4scqh") pod "f1751d50-a273-4cdd-8f49-6db0f32f1a71" (UID: "f1751d50-a273-4cdd-8f49-6db0f32f1a71"). InnerVolumeSpecName "kube-api-access-4scqh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.129851 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-scripts" (OuterVolumeSpecName: "scripts") pod "f1751d50-a273-4cdd-8f49-6db0f32f1a71" (UID: "f1751d50-a273-4cdd-8f49-6db0f32f1a71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.145297 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1751d50-a273-4cdd-8f49-6db0f32f1a71" (UID: "f1751d50-a273-4cdd-8f49-6db0f32f1a71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.169003 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-config-data" (OuterVolumeSpecName: "config-data") pod "f1751d50-a273-4cdd-8f49-6db0f32f1a71" (UID: "f1751d50-a273-4cdd-8f49-6db0f32f1a71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.204580 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4scqh\" (UniqueName: \"kubernetes.io/projected/f1751d50-a273-4cdd-8f49-6db0f32f1a71-kube-api-access-4scqh\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.204615 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.204630 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.204645 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1751d50-a273-4cdd-8f49-6db0f32f1a71-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.396657 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-zf4x4" event={"ID":"f1751d50-a273-4cdd-8f49-6db0f32f1a71","Type":"ContainerDied","Data":"f691fbd17e21622a8f44264875dabf7615b580a9f3c2071c62caa021a2581cce"}
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.396717 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f691fbd17e21622a8f44264875dabf7615b580a9f3c2071c62caa021a2581cce"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.396738 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-zf4x4"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.741923 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Nov 24 21:04:21 crc kubenswrapper[4812]: E1124 21:04:21.743160 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1751d50-a273-4cdd-8f49-6db0f32f1a71" containerName="aodh-db-sync"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.743189 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1751d50-a273-4cdd-8f49-6db0f32f1a71" containerName="aodh-db-sync"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.743526 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1751d50-a273-4cdd-8f49-6db0f32f1a71" containerName="aodh-db-sync"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.745927 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.748869 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.748944 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.748886 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7kwfq"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.760262 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.821874 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-scripts\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.821940 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7gsp\" (UniqueName: \"kubernetes.io/projected/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-kube-api-access-h7gsp\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.821981 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.822148 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-config-data\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.924419 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-scripts\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.924479 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7gsp\" (UniqueName: \"kubernetes.io/projected/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-kube-api-access-h7gsp\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.924507 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.924583 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-config-data\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.936181 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.936516 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-config-data\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.945652 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7gsp\" (UniqueName: \"kubernetes.io/projected/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-kube-api-access-h7gsp\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:21 crc kubenswrapper[4812]: I1124 21:04:21.953063 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-scripts\") pod \"aodh-0\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " pod="openstack/aodh-0"
Nov 24 21:04:22 crc kubenswrapper[4812]: I1124 21:04:22.073106 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 24 21:04:22 crc kubenswrapper[4812]: W1124 21:04:22.564988 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b97d9b_00bd_49fb_b0e3_c3d5aec3a221.slice/crio-8f0a250a633f0d80254f0ab8a9fd8c87f6cb8222a131420874c4ad5b0d05c9e4 WatchSource:0}: Error finding container 8f0a250a633f0d80254f0ab8a9fd8c87f6cb8222a131420874c4ad5b0d05c9e4: Status 404 returned error can't find the container with id 8f0a250a633f0d80254f0ab8a9fd8c87f6cb8222a131420874c4ad5b0d05c9e4
Nov 24 21:04:22 crc kubenswrapper[4812]: I1124 21:04:22.581286 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 24 21:04:23 crc kubenswrapper[4812]: I1124 21:04:23.456116 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerStarted","Data":"d90cbfc2b1ac105c42127437f086cc5d8b9f1d66bfb930f7b95a05a214d41106"}
Nov 24 21:04:23 crc kubenswrapper[4812]: I1124 21:04:23.456467 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerStarted","Data":"8f0a250a633f0d80254f0ab8a9fd8c87f6cb8222a131420874c4ad5b0d05c9e4"}
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.348624 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.349941 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-central-agent" containerID="cri-o://e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784" gracePeriod=30
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.350709 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="proxy-httpd" containerID="cri-o://487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e" gracePeriod=30
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.350777 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="sg-core" containerID="cri-o://ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867" gracePeriod=30
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.350825 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-notification-agent" containerID="cri-o://5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84" gracePeriod=30
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.454263 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.1.152:3000/\": read tcp 10.217.0.2:35178->10.217.1.152:3000: read: connection reset by peer"
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.472348 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b3547ac3-1f61-41e5-9674-317a8280dbfc","Type":"ContainerStarted","Data":"f0093719822a79dbed18c05b72ae891fdeb5798c8fc784814b8ef0605ad791b6"}
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.472393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"b3547ac3-1f61-41e5-9674-317a8280dbfc","Type":"ContainerStarted","Data":"b7ff09ded64ce9de7743aa620e21d595921f7bbf86408f621c90414bd1c62338"}
Nov 24 21:04:24 crc kubenswrapper[4812]: I1124 21:04:24.515127 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.515086148 podStartE2EDuration="18.515086148s" podCreationTimestamp="2025-11-24 21:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:04:24.497913502 +0000 UTC m=+6458.286865873" watchObservedRunningTime="2025-11-24 21:04:24.515086148 +0000 UTC m=+6458.304038519"
Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.113561 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.485893 4812 generic.go:334] "Generic (PLEG): container finished" podID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerID="487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e" exitCode=0
Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.485924 4812 generic.go:334] "Generic (PLEG): container finished" podID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerID="ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867" exitCode=2
Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.485932 4812 generic.go:334] "Generic (PLEG): container finished" podID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerID="e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784" exitCode=0
Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.485969 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerDied","Data":"487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e"}
event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerDied","Data":"487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e"} Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.486010 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerDied","Data":"ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867"} Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.486021 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerDied","Data":"e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784"} Nov 24 21:04:25 crc kubenswrapper[4812]: I1124 21:04:25.489053 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerStarted","Data":"2e102b5b1751183026e477458b8aba2119e6c6a602b0a97436eac2828b27fe80"} Nov 24 21:04:26 crc kubenswrapper[4812]: I1124 21:04:26.800406 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.1.152:3000/\": dial tcp 10.217.1.152:3000: connect: connection refused" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.150513 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.173031 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.250660 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-scripts\") pod \"0178505f-e93d-49f9-aa8c-f68abc222d19\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.250830 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-run-httpd\") pod \"0178505f-e93d-49f9-aa8c-f68abc222d19\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.250937 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-sg-core-conf-yaml\") pod \"0178505f-e93d-49f9-aa8c-f68abc222d19\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.250979 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgtwn\" (UniqueName: \"kubernetes.io/projected/0178505f-e93d-49f9-aa8c-f68abc222d19-kube-api-access-rgtwn\") pod \"0178505f-e93d-49f9-aa8c-f68abc222d19\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.251161 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0178505f-e93d-49f9-aa8c-f68abc222d19" (UID: "0178505f-e93d-49f9-aa8c-f68abc222d19"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.251006 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-config-data\") pod \"0178505f-e93d-49f9-aa8c-f68abc222d19\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.251667 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-log-httpd\") pod \"0178505f-e93d-49f9-aa8c-f68abc222d19\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.251702 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-combined-ca-bundle\") pod \"0178505f-e93d-49f9-aa8c-f68abc222d19\" (UID: \"0178505f-e93d-49f9-aa8c-f68abc222d19\") " Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.252272 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.253022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0178505f-e93d-49f9-aa8c-f68abc222d19" (UID: "0178505f-e93d-49f9-aa8c-f68abc222d19"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.256439 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-scripts" (OuterVolumeSpecName: "scripts") pod "0178505f-e93d-49f9-aa8c-f68abc222d19" (UID: "0178505f-e93d-49f9-aa8c-f68abc222d19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.256588 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0178505f-e93d-49f9-aa8c-f68abc222d19-kube-api-access-rgtwn" (OuterVolumeSpecName: "kube-api-access-rgtwn") pod "0178505f-e93d-49f9-aa8c-f68abc222d19" (UID: "0178505f-e93d-49f9-aa8c-f68abc222d19"). InnerVolumeSpecName "kube-api-access-rgtwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.280617 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0178505f-e93d-49f9-aa8c-f68abc222d19" (UID: "0178505f-e93d-49f9-aa8c-f68abc222d19"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.345631 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0178505f-e93d-49f9-aa8c-f68abc222d19" (UID: "0178505f-e93d-49f9-aa8c-f68abc222d19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.353791 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.353818 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.353827 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.353837 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgtwn\" (UniqueName: \"kubernetes.io/projected/0178505f-e93d-49f9-aa8c-f68abc222d19-kube-api-access-rgtwn\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.353847 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0178505f-e93d-49f9-aa8c-f68abc222d19-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.375901 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-config-data" (OuterVolumeSpecName: "config-data") pod "0178505f-e93d-49f9-aa8c-f68abc222d19" (UID: "0178505f-e93d-49f9-aa8c-f68abc222d19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.455355 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0178505f-e93d-49f9-aa8c-f68abc222d19-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.539923 4812 generic.go:334] "Generic (PLEG): container finished" podID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerID="5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84" exitCode=0 Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.539978 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.539990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerDied","Data":"5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84"}
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.540041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0178505f-e93d-49f9-aa8c-f68abc222d19","Type":"ContainerDied","Data":"8c7c2267058699c04024c87eafda81c4e172db95d1428ea96b28a34f97a44099"}
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.540074 4812 scope.go:117] "RemoveContainer" containerID="487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.545772 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerStarted","Data":"e3cd4ff7119797a15da1cdb99e930a6dbf04b4896bee8568068c1435cada1ec6"}
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.613021 4812 scope.go:117] "RemoveContainer" containerID="ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.625166 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.655279 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.663557 4812 scope.go:117] "RemoveContainer" containerID="5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.666750 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.667156 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-notification-agent"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667173 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-notification-agent"
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.667185 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="sg-core"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667191 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="sg-core"
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.667211 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-central-agent"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667218 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-central-agent"
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.667226 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="proxy-httpd"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667232 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="proxy-httpd"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667416 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="proxy-httpd"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667441 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-central-agent"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667449 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="ceilometer-notification-agent"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.667459 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" containerName="sg-core"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.669828 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.676032 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.676778 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.677014 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.686197 4812 scope.go:117] "RemoveContainer" containerID="e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.714519 4812 scope.go:117] "RemoveContainer" containerID="487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e"
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.718647 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e\": container with ID starting with 487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e not found: ID does not exist" containerID="487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.718681 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e"} err="failed to get container status \"487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e\": rpc error: code = NotFound desc = could not find container \"487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e\": container with ID starting with 487a38ba5e33fcfd40c4d6c72e01762cdb6a0e3898fac6874171ee1464c8432e not found: ID does not exist"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.718702 4812 scope.go:117] "RemoveContainer" containerID="ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867"
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.718967 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867\": container with ID starting with ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867 not found: ID does not exist" containerID="ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.719000 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867"} err="failed to get container status \"ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867\": rpc error: code = NotFound desc = could not find container \"ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867\": container with ID starting with ec284e90da49c7872d39a75a72e44ef46dcfe4a2ff6b8f4130f4202567d06867 not found: ID does not exist"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.719014 4812 scope.go:117] "RemoveContainer" containerID="5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84"
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.719210 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84\": container with ID starting with 5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84 not found: ID does not exist" containerID="5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.719230 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84"} err="failed to get container status \"5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84\": rpc error: code = NotFound desc = could not find container \"5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84\": container with ID starting with 5f2c18a2db9957503a7e5f7ffb4c6e2d5089e901e2a3a7250aed5d258d027f84 not found: ID does not exist"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.719252 4812 scope.go:117] "RemoveContainer" containerID="e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784"
Nov 24 21:04:27 crc kubenswrapper[4812]: E1124 21:04:27.719596 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784\": container with ID starting with e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784 not found: ID does not exist" containerID="e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.719661 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784"} err="failed to get container status \"e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784\": rpc error: code = NotFound desc = could not find container \"e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784\": container with ID starting with e63e1edd3d002ad5dd5b272d594ce2b62c9fd19ad1a3ad959c12423a544a9784 not found: ID does not exist"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.761651 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-run-httpd\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0"
Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.761715 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-log-httpd\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0"
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-log-httpd\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.761915 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.761959 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-config-data\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.761987 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-scripts\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.762029 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wl7f\" (UniqueName: \"kubernetes.io/projected/6b88ae6b-17e4-4653-931a-4e9e6459d934-kube-api-access-7wl7f\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.762068 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.864022 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-run-httpd\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.864098 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-log-httpd\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.864191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.864210 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-config-data\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.864224 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-scripts\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.864242 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wl7f\" (UniqueName: \"kubernetes.io/projected/6b88ae6b-17e4-4653-931a-4e9e6459d934-kube-api-access-7wl7f\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.864260 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.871066 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-run-httpd\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.871682 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-log-httpd\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.880007 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-config-data\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.880878 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.881602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.901243 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-scripts\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:27 crc kubenswrapper[4812]: I1124 21:04:27.907079 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wl7f\" (UniqueName: \"kubernetes.io/projected/6b88ae6b-17e4-4653-931a-4e9e6459d934-kube-api-access-7wl7f\") pod \"ceilometer-0\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") " pod="openstack/ceilometer-0" Nov 24 21:04:28 crc kubenswrapper[4812]: I1124 21:04:28.013475 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:04:28 crc kubenswrapper[4812]: I1124 21:04:28.594303 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:04:28 crc kubenswrapper[4812]: I1124 21:04:28.838984 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:04:28 crc kubenswrapper[4812]: I1124 21:04:28.977306 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0178505f-e93d-49f9-aa8c-f68abc222d19" path="/var/lib/kubelet/pods/0178505f-e93d-49f9-aa8c-f68abc222d19/volumes" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.406134 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b69rk"] Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.411858 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.416788 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b69rk"] Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.502447 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjcp8\" (UniqueName: \"kubernetes.io/projected/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-kube-api-access-sjcp8\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.502622 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-catalog-content\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.502647 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-utilities\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.586205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerStarted","Data":"7f10999acd3409a25954f531c60d6ccdfbe6ba408c87511d9f32b24d1b07d7f0"} Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.588002 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerStarted","Data":"7decf14e369dbddb060c9643e449540e4f9635ef84dec04c30330c5d565ffb8c"} Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.588132 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-api" containerID="cri-o://d90cbfc2b1ac105c42127437f086cc5d8b9f1d66bfb930f7b95a05a214d41106" gracePeriod=30 Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.588491 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-listener" 
containerID="cri-o://7decf14e369dbddb060c9643e449540e4f9635ef84dec04c30330c5d565ffb8c" gracePeriod=30 Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.588543 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-notifier" containerID="cri-o://e3cd4ff7119797a15da1cdb99e930a6dbf04b4896bee8568068c1435cada1ec6" gracePeriod=30 Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.588576 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-evaluator" containerID="cri-o://2e102b5b1751183026e477458b8aba2119e6c6a602b0a97436eac2828b27fe80" gracePeriod=30 Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.604607 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-catalog-content\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.604654 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-utilities\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.604814 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjcp8\" (UniqueName: \"kubernetes.io/projected/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-kube-api-access-sjcp8\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.605588 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-catalog-content\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.605809 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-utilities\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.629702 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.9602294759999999 podStartE2EDuration="8.62968491s" podCreationTimestamp="2025-11-24 21:04:21 +0000 UTC" firstStartedPulling="2025-11-24 21:04:22.567223966 +0000 UTC m=+6456.356176327" lastFinishedPulling="2025-11-24 21:04:29.23667938 +0000 UTC m=+6463.025631761" observedRunningTime="2025-11-24 21:04:29.623407582 +0000 UTC m=+6463.412359953" watchObservedRunningTime="2025-11-24 21:04:29.62968491 +0000 UTC m=+6463.418637281" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.641182 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjcp8\" (UniqueName: 
\"kubernetes.io/projected/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-kube-api-access-sjcp8\") pod \"community-operators-b69rk\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") " pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:29 crc kubenswrapper[4812]: I1124 21:04:29.761844 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.065007 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7gkjk"] Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.091661 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-7gkjk"] Nov 24 21:04:30 crc kubenswrapper[4812]: E1124 21:04:30.239874 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b97d9b_00bd_49fb_b0e3_c3d5aec3a221.slice/crio-conmon-d90cbfc2b1ac105c42127437f086cc5d8b9f1d66bfb930f7b95a05a214d41106.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b97d9b_00bd_49fb_b0e3_c3d5aec3a221.slice/crio-d90cbfc2b1ac105c42127437f086cc5d8b9f1d66bfb930f7b95a05a214d41106.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.425690 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b69rk"] Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.600906 4812 generic.go:334] "Generic (PLEG): container finished" podID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerID="e3cd4ff7119797a15da1cdb99e930a6dbf04b4896bee8568068c1435cada1ec6" exitCode=0 Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.601264 4812 generic.go:334] "Generic (PLEG): container finished" podID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerID="2e102b5b1751183026e477458b8aba2119e6c6a602b0a97436eac2828b27fe80" exitCode=0 Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.601278 4812 generic.go:334] "Generic (PLEG): container finished" podID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerID="d90cbfc2b1ac105c42127437f086cc5d8b9f1d66bfb930f7b95a05a214d41106" exitCode=0 Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.600975 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerDied","Data":"e3cd4ff7119797a15da1cdb99e930a6dbf04b4896bee8568068c1435cada1ec6"} Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.601331 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerDied","Data":"2e102b5b1751183026e477458b8aba2119e6c6a602b0a97436eac2828b27fe80"} Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.601380 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerDied","Data":"d90cbfc2b1ac105c42127437f086cc5d8b9f1d66bfb930f7b95a05a214d41106"} Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.603805 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" 
event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerStarted","Data":"3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0"} Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.603849 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerStarted","Data":"53cad6727a0605eb8a0594171beb60e115b56dc0268ff0380ee7fe8ec131a351"} Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.607445 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerStarted","Data":"0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9"} Nov 24 21:04:30 crc kubenswrapper[4812]: I1124 21:04:30.979145 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8455425-64c6-4828-aba6-ef211a15a2ba" path="/var/lib/kubelet/pods/d8455425-64c6-4828-aba6-ef211a15a2ba/volumes" Nov 24 21:04:31 crc kubenswrapper[4812]: I1124 21:04:31.620489 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerID="3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0" exitCode=0 Nov 24 21:04:31 crc kubenswrapper[4812]: I1124 21:04:31.620995 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerDied","Data":"3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0"} Nov 24 21:04:31 crc kubenswrapper[4812]: I1124 21:04:31.660538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerStarted","Data":"09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5"} Nov 24 21:04:31 crc kubenswrapper[4812]: I1124 21:04:31.660584 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerStarted","Data":"806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0"} Nov 24 21:04:32 crc kubenswrapper[4812]: I1124 21:04:32.029076 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xzss"] Nov 24 21:04:32 crc kubenswrapper[4812]: I1124 21:04:32.038529 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-7xzss"] Nov 24 21:04:32 crc kubenswrapper[4812]: I1124 21:04:32.671259 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerStarted","Data":"4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76"} Nov 24 21:04:32 crc kubenswrapper[4812]: I1124 21:04:32.977995 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74b9a17c-1408-4008-a7ce-2805decab132" path="/var/lib/kubelet/pods/74b9a17c-1408-4008-a7ce-2805decab132/volumes" Nov 24 21:04:32 crc kubenswrapper[4812]: I1124 21:04:32.998279 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:04:32 crc kubenswrapper[4812]: I1124 21:04:32.998368 4812 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:04:33 crc kubenswrapper[4812]: I1124 21:04:33.686604 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-central-agent" containerID="cri-o://0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9" gracePeriod=30 Nov 24 21:04:33 crc kubenswrapper[4812]: I1124 21:04:33.686895 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerStarted","Data":"18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc"} Nov 24 21:04:33 crc kubenswrapper[4812]: I1124 21:04:33.686951 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:04:33 crc kubenswrapper[4812]: I1124 21:04:33.687230 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="proxy-httpd" containerID="cri-o://18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc" gracePeriod=30 Nov 24 21:04:33 crc kubenswrapper[4812]: I1124 21:04:33.687287 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="sg-core" containerID="cri-o://09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5" gracePeriod=30 Nov 24 21:04:33 crc kubenswrapper[4812]: I1124 21:04:33.687346 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-notification-agent" containerID="cri-o://806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0" gracePeriod=30 Nov 24 21:04:33 crc kubenswrapper[4812]: I1124 21:04:33.728734 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.05729437 podStartE2EDuration="6.728709852s" podCreationTimestamp="2025-11-24 21:04:27 +0000 UTC" firstStartedPulling="2025-11-24 21:04:29.197759248 +0000 UTC m=+6462.986711629" lastFinishedPulling="2025-11-24 21:04:32.86917473 +0000 UTC m=+6466.658127111" observedRunningTime="2025-11-24 21:04:33.715431666 +0000 UTC m=+6467.504384047" watchObservedRunningTime="2025-11-24 21:04:33.728709852 +0000 UTC m=+6467.517662233" Nov 24 21:04:34 crc kubenswrapper[4812]: I1124 21:04:34.697310 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerID="18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc" exitCode=0 Nov 24 21:04:34 crc kubenswrapper[4812]: I1124 21:04:34.698247 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerID="09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5" exitCode=2 Nov 24 21:04:34 crc kubenswrapper[4812]: I1124 21:04:34.698317 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerID="806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0" exitCode=0 Nov 24 21:04:34 
Nov 24 21:04:34 crc kubenswrapper[4812]: I1124 21:04:34.698491 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerDied","Data":"09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5"}
Nov 24 21:04:34 crc kubenswrapper[4812]: I1124 21:04:34.698536 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerDied","Data":"806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0"}
Nov 24 21:04:34 crc kubenswrapper[4812]: I1124 21:04:34.700880 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerID="4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76" exitCode=0
Nov 24 21:04:34 crc kubenswrapper[4812]: I1124 21:04:34.700925 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerDied","Data":"4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76"}
Nov 24 21:04:35 crc kubenswrapper[4812]: I1124 21:04:35.710697 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerStarted","Data":"6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2"}
Nov 24 21:04:35 crc kubenswrapper[4812]: I1124 21:04:35.732866 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b69rk" podStartSLOduration=3.268327065 podStartE2EDuration="6.732849847s" podCreationTimestamp="2025-11-24 21:04:29 +0000 UTC" firstStartedPulling="2025-11-24 21:04:31.624662146 +0000 UTC m=+6465.413614517" lastFinishedPulling="2025-11-24 21:04:35.089184938 +0000 UTC m=+6468.878137299" observedRunningTime="2025-11-24 21:04:35.731229041 +0000 UTC m=+6469.520181412" watchObservedRunningTime="2025-11-24 21:04:35.732849847 +0000 UTC m=+6469.521802218"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.150889 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.161767 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.679165 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.749418 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerID="0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9" exitCode=0
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.751022 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.751556 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerDied","Data":"0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9"}
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.751585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6b88ae6b-17e4-4653-931a-4e9e6459d934","Type":"ContainerDied","Data":"7f10999acd3409a25954f531c60d6ccdfbe6ba408c87511d9f32b24d1b07d7f0"}
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.751603 4812 scope.go:117] "RemoveContainer" containerID="18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.757173 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.774207 4812 scope.go:117] "RemoveContainer" containerID="09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.797726 4812 scope.go:117] "RemoveContainer" containerID="806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.800780 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-scripts\") pod \"6b88ae6b-17e4-4653-931a-4e9e6459d934\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") "
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.800841 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-combined-ca-bundle\") pod \"6b88ae6b-17e4-4653-931a-4e9e6459d934\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") "
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.800880 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-config-data\") pod \"6b88ae6b-17e4-4653-931a-4e9e6459d934\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") "
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.801910 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-run-httpd\") pod \"6b88ae6b-17e4-4653-931a-4e9e6459d934\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") "
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.802007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-sg-core-conf-yaml\") pod \"6b88ae6b-17e4-4653-931a-4e9e6459d934\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") "
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.802041 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wl7f\" (UniqueName: \"kubernetes.io/projected/6b88ae6b-17e4-4653-931a-4e9e6459d934-kube-api-access-7wl7f\") pod \"6b88ae6b-17e4-4653-931a-4e9e6459d934\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") "
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.802149 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-log-httpd\") pod \"6b88ae6b-17e4-4653-931a-4e9e6459d934\" (UID: \"6b88ae6b-17e4-4653-931a-4e9e6459d934\") "
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.804092 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6b88ae6b-17e4-4653-931a-4e9e6459d934" (UID: "6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.804745 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6b88ae6b-17e4-4653-931a-4e9e6459d934" (UID: "6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.810829 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b88ae6b-17e4-4653-931a-4e9e6459d934-kube-api-access-7wl7f" (OuterVolumeSpecName: "kube-api-access-7wl7f") pod "6b88ae6b-17e4-4653-931a-4e9e6459d934" (UID: "6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "kube-api-access-7wl7f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.812203 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-scripts" (OuterVolumeSpecName: "scripts") pod "6b88ae6b-17e4-4653-931a-4e9e6459d934" (UID: "6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.854394 4812 scope.go:117] "RemoveContainer" containerID="0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.861752 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6b88ae6b-17e4-4653-931a-4e9e6459d934" (UID: "6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.907479 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.907505 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.907552 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wl7f\" (UniqueName: \"kubernetes.io/projected/6b88ae6b-17e4-4653-931a-4e9e6459d934-kube-api-access-7wl7f\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.907562 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6b88ae6b-17e4-4653-931a-4e9e6459d934-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.907571 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.909621 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b88ae6b-17e4-4653-931a-4e9e6459d934" (UID: "6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.944034 4812 scope.go:117] "RemoveContainer" containerID="18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc"
Nov 24 21:04:37 crc kubenswrapper[4812]: E1124 21:04:37.947957 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc\": container with ID starting with 18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc not found: ID does not exist" containerID="18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.948003 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc"} err="failed to get container status \"18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc\": rpc error: code = NotFound desc = could not find container \"18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc\": container with ID starting with 18ab78b02c8dc0f65635636b889c4fb1e443aab2362270686d55bdbbc95aafbc not found: ID does not exist"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.948031 4812 scope.go:117] "RemoveContainer" containerID="09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5"
Nov 24 21:04:37 crc kubenswrapper[4812]: E1124 21:04:37.948300 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5\": container with ID starting with 09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5 not found: ID does not exist" containerID="09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.948319 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5"} err="failed to get container status \"09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5\": rpc error: code = NotFound desc = could not find container \"09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5\": container with ID starting with 09793a5b2449cf740d048bfc1f9a5730a0a49e71f269747ef3f077c0614824e5 not found: ID does not exist"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.948346 4812 scope.go:117] "RemoveContainer" containerID="806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0"
Nov 24 21:04:37 crc kubenswrapper[4812]: E1124 21:04:37.948677 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0\": container with ID starting with 806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0 not found: ID does not exist" containerID="806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.948695 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0"} err="failed to get container status \"806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0\": rpc error: code = NotFound desc = could not find container \"806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0\": container with ID starting with 806cb69dc40e03a0c8511bdb6e45071da777593d9b8fb3dc129d944128ee8aa0 not found: ID does not exist"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.948706 4812 scope.go:117] "RemoveContainer" containerID="0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9"
Nov 24 21:04:37 crc kubenswrapper[4812]: E1124 21:04:37.949070 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9\": container with ID starting with 0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9 not found: ID does not exist" containerID="0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.949169 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9"} err="failed to get container status \"0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9\": rpc error: code = NotFound desc = could not find container \"0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9\": container with ID starting with 0fdb06df673c57cdfa52813c39f3b3e1b806f2f089aa2502352b00d2a7efa4d9 not found: ID does not exist"
Nov 24 21:04:37 crc kubenswrapper[4812]: I1124 21:04:37.982944 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-config-data" (OuterVolumeSpecName: "config-data") pod "6b88ae6b-17e4-4653-931a-4e9e6459d934" (UID: "6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
"6b88ae6b-17e4-4653-931a-4e9e6459d934"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.009566 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.009595 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b88ae6b-17e4-4653-931a-4e9e6459d934-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.104483 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.121673 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.131896 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:04:38 crc kubenswrapper[4812]: E1124 21:04:38.132421 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="proxy-httpd" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132449 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="proxy-httpd" Nov 24 21:04:38 crc kubenswrapper[4812]: E1124 21:04:38.132487 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-central-agent" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132497 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-central-agent" Nov 24 21:04:38 crc kubenswrapper[4812]: E1124 21:04:38.132526 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="sg-core" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132535 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="sg-core" Nov 24 21:04:38 crc kubenswrapper[4812]: E1124 21:04:38.132555 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-notification-agent" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132564 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-notification-agent" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132868 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-central-agent" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132921 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="sg-core" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132938 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="ceilometer-notification-agent" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.132977 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" containerName="proxy-httpd" Nov 24 21:04:38 crc 
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.140231 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.147629 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.147967 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.215317 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-log-httpd\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.215375 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-config-data\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.215425 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-scripts\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.215466 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.215487 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.215515 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-run-httpd\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.215585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zwr4\" (UniqueName: \"kubernetes.io/projected/ba808a86-0ae2-458a-9d75-68f7e4a3139b-kube-api-access-2zwr4\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.317459 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zwr4\" (UniqueName: \"kubernetes.io/projected/ba808a86-0ae2-458a-9d75-68f7e4a3139b-kube-api-access-2zwr4\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
\"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.317579 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-log-httpd\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.317614 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-config-data\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.317671 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-scripts\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.317727 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.317756 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.317793 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-run-httpd\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.318089 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-log-httpd\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.318697 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-run-httpd\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.321273 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-config-data\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.323213 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-scripts\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0" Nov 24 21:04:38 crc 
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.334262 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.338851 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zwr4\" (UniqueName: \"kubernetes.io/projected/ba808a86-0ae2-458a-9d75-68f7e4a3139b-kube-api-access-2zwr4\") pod \"ceilometer-0\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.467826 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 21:04:38 crc kubenswrapper[4812]: W1124 21:04:38.981419 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba808a86_0ae2_458a_9d75_68f7e4a3139b.slice/crio-fb7b3be3a66373d3f463153e94ba90bdff6f859ba91064d9158ad67ecb8ceaf7 WatchSource:0}: Error finding container fb7b3be3a66373d3f463153e94ba90bdff6f859ba91064d9158ad67ecb8ceaf7: Status 404 returned error can't find the container with id fb7b3be3a66373d3f463153e94ba90bdff6f859ba91064d9158ad67ecb8ceaf7
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.984257 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b88ae6b-17e4-4653-931a-4e9e6459d934" path="/var/lib/kubelet/pods/6b88ae6b-17e4-4653-931a-4e9e6459d934/volumes"
Nov 24 21:04:38 crc kubenswrapper[4812]: I1124 21:04:38.985093 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 21:04:39 crc kubenswrapper[4812]: I1124 21:04:39.762556 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b69rk"
Nov 24 21:04:39 crc kubenswrapper[4812]: I1124 21:04:39.762961 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b69rk"
Nov 24 21:04:39 crc kubenswrapper[4812]: I1124 21:04:39.778942 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerStarted","Data":"a2bf6e6cc9014d4d02ca9705d3291c5a22583acb3896413876b8dcd106af7656"}
Nov 24 21:04:39 crc kubenswrapper[4812]: I1124 21:04:39.778990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerStarted","Data":"fb7b3be3a66373d3f463153e94ba90bdff6f859ba91064d9158ad67ecb8ceaf7"}
Nov 24 21:04:39 crc kubenswrapper[4812]: I1124 21:04:39.828163 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b69rk"
Nov 24 21:04:40 crc kubenswrapper[4812]: I1124 21:04:40.792613 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerStarted","Data":"3908652c1d4ff079a1ee5172d3b231803883dd7bffd22d0bef50285d935f3e4e"}
event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerStarted","Data":"3908652c1d4ff079a1ee5172d3b231803883dd7bffd22d0bef50285d935f3e4e"} Nov 24 21:04:40 crc kubenswrapper[4812]: I1124 21:04:40.865114 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b69rk" Nov 24 21:04:40 crc kubenswrapper[4812]: I1124 21:04:40.919223 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b69rk"] Nov 24 21:04:41 crc kubenswrapper[4812]: I1124 21:04:41.804194 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerStarted","Data":"01d4a8a54647ae57c5ba20a96678fcc9abe6993514f1782a121b183a21e1e783"} Nov 24 21:04:42 crc kubenswrapper[4812]: I1124 21:04:42.817381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerStarted","Data":"85d785be262af3ded0dec528f5e2d910053d652e624f6745c77c829bf6d52f8d"} Nov 24 21:04:42 crc kubenswrapper[4812]: I1124 21:04:42.817499 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b69rk" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="registry-server" containerID="cri-o://6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2" gracePeriod=2 Nov 24 21:04:42 crc kubenswrapper[4812]: I1124 21:04:42.819534 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:04:42 crc kubenswrapper[4812]: I1124 21:04:42.872454 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.553083053 podStartE2EDuration="4.872438045s" podCreationTimestamp="2025-11-24 21:04:38 +0000 UTC" firstStartedPulling="2025-11-24 21:04:38.987079924 +0000 UTC m=+6472.776032315" lastFinishedPulling="2025-11-24 21:04:42.306434906 +0000 UTC m=+6476.095387307" observedRunningTime="2025-11-24 21:04:42.86627811 +0000 UTC m=+6476.655230491" watchObservedRunningTime="2025-11-24 21:04:42.872438045 +0000 UTC m=+6476.661390416" Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.394111 4812 util.go:48] "No ready sandbox for pod can be found. 
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.460107 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjcp8\" (UniqueName: \"kubernetes.io/projected/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-kube-api-access-sjcp8\") pod \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") "
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.460146 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-catalog-content\") pod \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") "
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.460188 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-utilities\") pod \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\" (UID: \"ba9aef21-5aaa-4c9e-9db2-19f2d488da85\") "
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.461211 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-utilities" (OuterVolumeSpecName: "utilities") pod "ba9aef21-5aaa-4c9e-9db2-19f2d488da85" (UID: "ba9aef21-5aaa-4c9e-9db2-19f2d488da85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.466770 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-kube-api-access-sjcp8" (OuterVolumeSpecName: "kube-api-access-sjcp8") pod "ba9aef21-5aaa-4c9e-9db2-19f2d488da85" (UID: "ba9aef21-5aaa-4c9e-9db2-19f2d488da85"). InnerVolumeSpecName "kube-api-access-sjcp8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.546112 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba9aef21-5aaa-4c9e-9db2-19f2d488da85" (UID: "ba9aef21-5aaa-4c9e-9db2-19f2d488da85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.562554 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjcp8\" (UniqueName: \"kubernetes.io/projected/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-kube-api-access-sjcp8\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.562759 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.562842 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba9aef21-5aaa-4c9e-9db2-19f2d488da85-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.830964 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerID="6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2" exitCode=0
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.831152 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b69rk"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.831229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerDied","Data":"6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2"}
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.831315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b69rk" event={"ID":"ba9aef21-5aaa-4c9e-9db2-19f2d488da85","Type":"ContainerDied","Data":"53cad6727a0605eb8a0594171beb60e115b56dc0268ff0380ee7fe8ec131a351"}
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.831392 4812 scope.go:117] "RemoveContainer" containerID="6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.872898 4812 scope.go:117] "RemoveContainer" containerID="4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.892134 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b69rk"]
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.904746 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b69rk"]
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.928995 4812 scope.go:117] "RemoveContainer" containerID="3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.988523 4812 scope.go:117] "RemoveContainer" containerID="6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2"
Nov 24 21:04:43 crc kubenswrapper[4812]: E1124 21:04:43.988948 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2\": container with ID starting with 6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2 not found: ID does not exist" containerID="6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.988995 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2"} err="failed to get container status \"6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2\": rpc error: code = NotFound desc = could not find container \"6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2\": container with ID starting with 6c690d379a92cac6b4d362fe5d3e8b6405966711d4aa09bb45b2b7f707914cc2 not found: ID does not exist"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.989021 4812 scope.go:117] "RemoveContainer" containerID="4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76"
Nov 24 21:04:43 crc kubenswrapper[4812]: E1124 21:04:43.989301 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76\": container with ID starting with 4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76 not found: ID does not exist" containerID="4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.989440 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76"} err="failed to get container status \"4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76\": rpc error: code = NotFound desc = could not find container \"4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76\": container with ID starting with 4989124f411bf12816797d8af9b333b33ff2eef4e5f860edc2c1240b1466dc76 not found: ID does not exist"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.989469 4812 scope.go:117] "RemoveContainer" containerID="3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0"
Nov 24 21:04:43 crc kubenswrapper[4812]: E1124 21:04:43.989798 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0\": container with ID starting with 3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0 not found: ID does not exist" containerID="3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0"
Nov 24 21:04:43 crc kubenswrapper[4812]: I1124 21:04:43.989828 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0"} err="failed to get container status \"3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0\": rpc error: code = NotFound desc = could not find container \"3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0\": container with ID starting with 3947cef21ef829203442bee2accb4a2b07732f33a28b6c2441d4697710aa5ca0 not found: ID does not exist"
Nov 24 21:04:44 crc kubenswrapper[4812]: I1124 21:04:44.993862 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" path="/var/lib/kubelet/pods/ba9aef21-5aaa-4c9e-9db2-19f2d488da85/volumes"
Nov 24 21:04:49 crc kubenswrapper[4812]: I1124 21:04:49.069833 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bhd5n"]
Nov 24 21:04:49 crc kubenswrapper[4812]: I1124 21:04:49.081683 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bhd5n"]
pods=["openstack/nova-cell1-cell-mapping-bhd5n"] Nov 24 21:04:50 crc kubenswrapper[4812]: I1124 21:04:50.994965 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390133e9-6e9e-4fe4-9d53-312c2c1ea999" path="/var/lib/kubelet/pods/390133e9-6e9e-4fe4-9d53-312c2c1ea999/volumes" Nov 24 21:04:59 crc kubenswrapper[4812]: E1124 21:04:59.903291 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b97d9b_00bd_49fb_b0e3_c3d5aec3a221.slice/crio-conmon-7decf14e369dbddb060c9643e449540e4f9635ef84dec04c30330c5d565ffb8c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14b97d9b_00bd_49fb_b0e3_c3d5aec3a221.slice/crio-7decf14e369dbddb060c9643e449540e4f9635ef84dec04c30330c5d565ffb8c.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.071192 4812 generic.go:334] "Generic (PLEG): container finished" podID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerID="7decf14e369dbddb060c9643e449540e4f9635ef84dec04c30330c5d565ffb8c" exitCode=137 Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.071268 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerDied","Data":"7decf14e369dbddb060c9643e449540e4f9635ef84dec04c30330c5d565ffb8c"} Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.613612 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.716799 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-combined-ca-bundle\") pod \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.717230 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7gsp\" (UniqueName: \"kubernetes.io/projected/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-kube-api-access-h7gsp\") pod \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.717382 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-scripts\") pod \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.717822 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-config-data\") pod \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\" (UID: \"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221\") " Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.725721 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-kube-api-access-h7gsp" (OuterVolumeSpecName: "kube-api-access-h7gsp") pod "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" (UID: "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221"). InnerVolumeSpecName "kube-api-access-h7gsp". 
Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.750545 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-scripts" (OuterVolumeSpecName: "scripts") pod "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" (UID: "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.824872 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7gsp\" (UniqueName: \"kubernetes.io/projected/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-kube-api-access-h7gsp\") on node \"crc\" DevicePath \"\""
Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.824922 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.869730 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-config-data" (OuterVolumeSpecName: "config-data") pod "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" (UID: "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.880648 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" (UID: "14b97d9b-00bd-49fb-b0e3-c3d5aec3a221"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.926877 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:05:00 crc kubenswrapper[4812]: I1124 21:05:00.926925 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.085741 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14b97d9b-00bd-49fb-b0e3-c3d5aec3a221","Type":"ContainerDied","Data":"8f0a250a633f0d80254f0ab8a9fd8c87f6cb8222a131420874c4ad5b0d05c9e4"}
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.085798 4812 scope.go:117] "RemoveContainer" containerID="7decf14e369dbddb060c9643e449540e4f9635ef84dec04c30330c5d565ffb8c"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.085956 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.110425 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.128537 4812 scope.go:117] "RemoveContainer" containerID="e3cd4ff7119797a15da1cdb99e930a6dbf04b4896bee8568068c1435cada1ec6"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.134548 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.151768 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Nov 24 21:05:01 crc kubenswrapper[4812]: E1124 21:05:01.152458 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-evaluator"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.152471 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-evaluator"
Nov 24 21:05:01 crc kubenswrapper[4812]: E1124 21:05:01.152514 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-notifier"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.152521 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-notifier"
Nov 24 21:05:01 crc kubenswrapper[4812]: E1124 21:05:01.152537 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="extract-content"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.152543 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="extract-content"
Nov 24 21:05:01 crc kubenswrapper[4812]: E1124 21:05:01.152553 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-api"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.152558 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-api"
Nov 24 21:05:01 crc kubenswrapper[4812]: E1124 21:05:01.152598 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-listener"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.152605 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-listener"
Nov 24 21:05:01 crc kubenswrapper[4812]: E1124 21:05:01.152619 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="registry-server"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.152626 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="registry-server"
Nov 24 21:05:01 crc kubenswrapper[4812]: E1124 21:05:01.152639 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="extract-utilities"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.152646 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="extract-utilities"
Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.156148 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="registry-server"
podUID="ba9aef21-5aaa-4c9e-9db2-19f2d488da85" containerName="registry-server" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.156184 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-notifier" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.156208 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-api" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.156219 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-listener" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.156242 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" containerName="aodh-evaluator" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.158373 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.168566 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.214904 4812 scope.go:117] "RemoveContainer" containerID="2e102b5b1751183026e477458b8aba2119e6c6a602b0a97436eac2828b27fe80" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.215198 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.215700 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.215900 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.215949 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7kwfq" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.216160 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.234239 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxpvx\" (UniqueName: \"kubernetes.io/projected/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-kube-api-access-dxpvx\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.234617 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.234972 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-internal-tls-certs\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.235568 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-scripts\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.235830 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-config-data\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.236044 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-public-tls-certs\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.271153 4812 scope.go:117] "RemoveContainer" containerID="d90cbfc2b1ac105c42127437f086cc5d8b9f1d66bfb930f7b95a05a214d41106" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.298032 4812 scope.go:117] "RemoveContainer" containerID="f3a58ce8cfc698c3431d4078a1d1609979f6147ef4070b57ef8bdeb25837698c" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.332475 4812 scope.go:117] "RemoveContainer" containerID="1ca892379c597a3c1f3242eb7ef0cf9f523784315ef3bb2969204feb27d835f9" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.338534 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.338711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-internal-tls-certs\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.338759 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-scripts\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.339392 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-config-data\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.339419 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-public-tls-certs\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.339877 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxpvx\" (UniqueName: \"kubernetes.io/projected/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-kube-api-access-dxpvx\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.342781 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-internal-tls-certs\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.343098 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-config-data\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.345007 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-public-tls-certs\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.357504 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-scripts\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.357565 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.366230 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxpvx\" (UniqueName: \"kubernetes.io/projected/1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5-kube-api-access-dxpvx\") pod \"aodh-0\" (UID: \"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5\") " pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.427354 4812 scope.go:117] "RemoveContainer" containerID="4629baa85eb91dc0136a7f9506fb8adc45f602f34b50c24db6a5280582951643" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.459788 4812 scope.go:117] "RemoveContainer" containerID="47f44abb95b3ce798af38fe0d3df75085f2b605ec71855a5e05432caf8803b56" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.503716 4812 scope.go:117] "RemoveContainer" containerID="a76e9fd7ec26815a3de93409fd5fb61b449d07f9c7df95a987a13875c9a9bae9" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.536555 4812 scope.go:117] "RemoveContainer" containerID="254c9f945f2c0239303a1fd368eead6cdc04b2c52fcf59402657ccd0f40bfa2a" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.545938 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.576330 4812 scope.go:117] "RemoveContainer" containerID="15608135dcc6a73e7cfbdae69ad3801a267454d7f1ce13f1321f54905fae364a" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.626935 4812 scope.go:117] "RemoveContainer" containerID="dca6d2bfcc6eb630eccdae751299ff905121a69131433d98bce8554a1b34f26a" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.679830 4812 scope.go:117] "RemoveContainer" containerID="e52321729579f7f95db4226503697d2d671e79d26560c4f733aba57f97cc42c9" Nov 24 21:05:01 crc kubenswrapper[4812]: I1124 21:05:01.720805 4812 scope.go:117] "RemoveContainer" containerID="49bdb351682d0d46b34001a7176c753828e21609402ab547ea2ce429249bf2ec" Nov 24 21:05:02 crc kubenswrapper[4812]: I1124 21:05:02.095574 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:05:02 crc kubenswrapper[4812]: I1124 21:05:02.986873 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14b97d9b-00bd-49fb-b0e3-c3d5aec3a221" path="/var/lib/kubelet/pods/14b97d9b-00bd-49fb-b0e3-c3d5aec3a221/volumes" Nov 24 21:05:02 crc kubenswrapper[4812]: I1124 21:05:02.999175 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:05:02 crc kubenswrapper[4812]: I1124 21:05:02.999245 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:05:03 crc kubenswrapper[4812]: I1124 21:05:03.114300 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5","Type":"ContainerStarted","Data":"3e1010809447e793431e6124ae76c9894b381eb2155027e3567e2aa5e28a0058"} Nov 24 21:05:03 crc kubenswrapper[4812]: I1124 21:05:03.114871 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5","Type":"ContainerStarted","Data":"510db28a862c8c7ca386dfdd6d8df140e395c252f4cea1056668a33c289cb38e"} Nov 24 21:05:04 crc kubenswrapper[4812]: I1124 21:05:04.129774 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5","Type":"ContainerStarted","Data":"069e72430ac0c2dde42ee4baf545aa6bff0e40e555c7fedeb3484d0689696c0c"} Nov 24 21:05:05 crc kubenswrapper[4812]: I1124 21:05:05.149217 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5","Type":"ContainerStarted","Data":"759fd129c5af53b3fa0ec2faa26ccc5921bedac6d0eac144901e2c1291ee6478"} Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.167036 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5","Type":"ContainerStarted","Data":"04fef28d8e4900e268a547667ee65562d01503b1bd4d55cc022b0699542ddd82"} Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.208199 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-846677f8f9-t6vjp"] Nov 24 21:05:06 crc 
kubenswrapper[4812]: I1124 21:05:06.210437 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.217281 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.640976007 podStartE2EDuration="5.217260744s" podCreationTimestamp="2025-11-24 21:05:01 +0000 UTC" firstStartedPulling="2025-11-24 21:05:02.154198571 +0000 UTC m=+6495.943150942" lastFinishedPulling="2025-11-24 21:05:05.730483318 +0000 UTC m=+6499.519435679" observedRunningTime="2025-11-24 21:05:06.193075379 +0000 UTC m=+6499.982027760" watchObservedRunningTime="2025-11-24 21:05:06.217260744 +0000 UTC m=+6500.006213125" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.220225 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.252916 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-846677f8f9-t6vjp"] Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.369675 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-sb\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.369770 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dqh\" (UniqueName: \"kubernetes.io/projected/85845ff2-82a4-4322-a54b-db5326e1c396-kube-api-access-j5dqh\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.369803 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-config\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.369927 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-nb\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.369967 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-dns-svc\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.370020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-openstack-cell1\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc 
kubenswrapper[4812]: I1124 21:05:06.471806 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-dns-svc\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.471893 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-openstack-cell1\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.471927 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-sb\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.471984 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5dqh\" (UniqueName: \"kubernetes.io/projected/85845ff2-82a4-4322-a54b-db5326e1c396-kube-api-access-j5dqh\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.472012 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-config\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.472105 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-nb\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.472830 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-dns-svc\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.472856 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-openstack-cell1\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.473183 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-nb\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.473387 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-sb\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.473696 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-config\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.496620 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5dqh\" (UniqueName: \"kubernetes.io/projected/85845ff2-82a4-4322-a54b-db5326e1c396-kube-api-access-j5dqh\") pod \"dnsmasq-dns-846677f8f9-t6vjp\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:06 crc kubenswrapper[4812]: I1124 21:05:06.543578 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:07 crc kubenswrapper[4812]: I1124 21:05:07.100938 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-846677f8f9-t6vjp"] Nov 24 21:05:07 crc kubenswrapper[4812]: I1124 21:05:07.178735 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" event={"ID":"85845ff2-82a4-4322-a54b-db5326e1c396","Type":"ContainerStarted","Data":"90ec943cea258cf1be6294ffb878c4e51d42551d784d8dcb8abe6e45ae456e28"} Nov 24 21:05:08 crc kubenswrapper[4812]: I1124 21:05:08.187699 4812 generic.go:334] "Generic (PLEG): container finished" podID="85845ff2-82a4-4322-a54b-db5326e1c396" containerID="557aaf14591cfbd4b9efe86e029471fe1247aea3944b44f9b5bed3387190dcc9" exitCode=0 Nov 24 21:05:08 crc kubenswrapper[4812]: I1124 21:05:08.187755 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" event={"ID":"85845ff2-82a4-4322-a54b-db5326e1c396","Type":"ContainerDied","Data":"557aaf14591cfbd4b9efe86e029471fe1247aea3944b44f9b5bed3387190dcc9"} Nov 24 21:05:08 crc kubenswrapper[4812]: I1124 21:05:08.473654 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:05:09 crc kubenswrapper[4812]: I1124 21:05:09.202183 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" event={"ID":"85845ff2-82a4-4322-a54b-db5326e1c396","Type":"ContainerStarted","Data":"229f51991e0c99373f438cd7202f5d150643a5a83cc4101d997bdef637123d7a"} Nov 24 21:05:09 crc kubenswrapper[4812]: I1124 21:05:09.202390 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:09 crc kubenswrapper[4812]: I1124 21:05:09.234679 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" podStartSLOduration=3.234652873 podStartE2EDuration="3.234652873s" podCreationTimestamp="2025-11-24 21:05:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:05:09.223711523 +0000 UTC m=+6503.012663944" watchObservedRunningTime="2025-11-24 21:05:09.234652873 +0000 UTC m=+6503.023605264" Nov 24 21:05:12 crc kubenswrapper[4812]: I1124 21:05:12.539748 4812 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:05:12 crc kubenswrapper[4812]: I1124 21:05:12.540539 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="bc18986f-71c1-4bac-a3cb-c8d60192cbe6" containerName="kube-state-metrics" containerID="cri-o://aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179" gracePeriod=30 Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.110350 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.157411 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9xmh\" (UniqueName: \"kubernetes.io/projected/bc18986f-71c1-4bac-a3cb-c8d60192cbe6-kube-api-access-h9xmh\") pod \"bc18986f-71c1-4bac-a3cb-c8d60192cbe6\" (UID: \"bc18986f-71c1-4bac-a3cb-c8d60192cbe6\") " Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.164294 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc18986f-71c1-4bac-a3cb-c8d60192cbe6-kube-api-access-h9xmh" (OuterVolumeSpecName: "kube-api-access-h9xmh") pod "bc18986f-71c1-4bac-a3cb-c8d60192cbe6" (UID: "bc18986f-71c1-4bac-a3cb-c8d60192cbe6"). InnerVolumeSpecName "kube-api-access-h9xmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.244312 4812 generic.go:334] "Generic (PLEG): container finished" podID="bc18986f-71c1-4bac-a3cb-c8d60192cbe6" containerID="aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179" exitCode=2 Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.244682 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc18986f-71c1-4bac-a3cb-c8d60192cbe6","Type":"ContainerDied","Data":"aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179"} Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.244794 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bc18986f-71c1-4bac-a3cb-c8d60192cbe6","Type":"ContainerDied","Data":"2002349606d1ad7d970f1553957d1806c3a7641ce88c30c02b0453b216846d20"} Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.244950 4812 scope.go:117] "RemoveContainer" containerID="aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.245147 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.260350 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9xmh\" (UniqueName: \"kubernetes.io/projected/bc18986f-71c1-4bac-a3cb-c8d60192cbe6-kube-api-access-h9xmh\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.271690 4812 scope.go:117] "RemoveContainer" containerID="aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179" Nov 24 21:05:13 crc kubenswrapper[4812]: E1124 21:05:13.272036 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179\": container with ID starting with aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179 not found: ID does not exist" containerID="aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.272067 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179"} err="failed to get container status \"aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179\": rpc error: code = NotFound desc = could not find container \"aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179\": container with ID starting with aa42ba1992ccf424c346fb1ee01c9e7a14593ff42fad48541822519cf08ad179 not found: ID does not exist" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.292714 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.303541 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.313791 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:05:13 crc kubenswrapper[4812]: E1124 21:05:13.314375 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc18986f-71c1-4bac-a3cb-c8d60192cbe6" containerName="kube-state-metrics" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.314395 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc18986f-71c1-4bac-a3cb-c8d60192cbe6" containerName="kube-state-metrics" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.314694 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc18986f-71c1-4bac-a3cb-c8d60192cbe6" containerName="kube-state-metrics" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.315726 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.317072 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.317795 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.335125 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.361688 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.361756 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgbbf\" (UniqueName: \"kubernetes.io/projected/bdff58f4-afb9-42ce-8bce-89173d306577-kube-api-access-cgbbf\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.361827 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.361849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.463280 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.463589 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgbbf\" (UniqueName: \"kubernetes.io/projected/bdff58f4-afb9-42ce-8bce-89173d306577-kube-api-access-cgbbf\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.463663 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.463687 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.480224 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.480602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.483054 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdff58f4-afb9-42ce-8bce-89173d306577-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.483149 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgbbf\" (UniqueName: \"kubernetes.io/projected/bdff58f4-afb9-42ce-8bce-89173d306577-kube-api-access-cgbbf\") pod \"kube-state-metrics-0\" (UID: \"bdff58f4-afb9-42ce-8bce-89173d306577\") " pod="openstack/kube-state-metrics-0" Nov 24 21:05:13 crc kubenswrapper[4812]: I1124 21:05:13.634940 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.117266 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.255212 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bdff58f4-afb9-42ce-8bce-89173d306577","Type":"ContainerStarted","Data":"f6745226f164b39dd5d14cc951ddae2c48a0bf9d0315d4c5c649c8a13153ae97"} Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.627870 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.628553 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-central-agent" containerID="cri-o://a2bf6e6cc9014d4d02ca9705d3291c5a22583acb3896413876b8dcd106af7656" gracePeriod=30 Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.628730 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="proxy-httpd" containerID="cri-o://85d785be262af3ded0dec528f5e2d910053d652e624f6745c77c829bf6d52f8d" gracePeriod=30 Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.628801 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-notification-agent" containerID="cri-o://3908652c1d4ff079a1ee5172d3b231803883dd7bffd22d0bef50285d935f3e4e" gracePeriod=30 Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.628835 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="sg-core" containerID="cri-o://01d4a8a54647ae57c5ba20a96678fcc9abe6993514f1782a121b183a21e1e783" gracePeriod=30 Nov 24 21:05:14 crc kubenswrapper[4812]: I1124 21:05:14.980185 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc18986f-71c1-4bac-a3cb-c8d60192cbe6" path="/var/lib/kubelet/pods/bc18986f-71c1-4bac-a3cb-c8d60192cbe6/volumes" Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.288081 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"bdff58f4-afb9-42ce-8bce-89173d306577","Type":"ContainerStarted","Data":"e1f3cf70695a32dc13148a0e49799c6204240ac842be4d0a8487f5ae50abd4c5"} Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.288759 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.293793 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerID="85d785be262af3ded0dec528f5e2d910053d652e624f6745c77c829bf6d52f8d" exitCode=0 Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.293843 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerID="01d4a8a54647ae57c5ba20a96678fcc9abe6993514f1782a121b183a21e1e783" exitCode=2 Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.293863 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerID="a2bf6e6cc9014d4d02ca9705d3291c5a22583acb3896413876b8dcd106af7656" 
exitCode=0 Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.293897 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerDied","Data":"85d785be262af3ded0dec528f5e2d910053d652e624f6745c77c829bf6d52f8d"} Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.293933 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerDied","Data":"01d4a8a54647ae57c5ba20a96678fcc9abe6993514f1782a121b183a21e1e783"} Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.293954 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerDied","Data":"a2bf6e6cc9014d4d02ca9705d3291c5a22583acb3896413876b8dcd106af7656"} Nov 24 21:05:15 crc kubenswrapper[4812]: I1124 21:05:15.321092 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.910949961 podStartE2EDuration="2.321056405s" podCreationTimestamp="2025-11-24 21:05:13 +0000 UTC" firstStartedPulling="2025-11-24 21:05:14.144921637 +0000 UTC m=+6507.933874008" lastFinishedPulling="2025-11-24 21:05:14.555028081 +0000 UTC m=+6508.343980452" observedRunningTime="2025-11-24 21:05:15.317887275 +0000 UTC m=+6509.106839686" watchObservedRunningTime="2025-11-24 21:05:15.321056405 +0000 UTC m=+6509.110008816" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.309662 4812 generic.go:334] "Generic (PLEG): container finished" podID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerID="3908652c1d4ff079a1ee5172d3b231803883dd7bffd22d0bef50285d935f3e4e" exitCode=0 Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.309833 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerDied","Data":"3908652c1d4ff079a1ee5172d3b231803883dd7bffd22d0bef50285d935f3e4e"} Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.545579 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.606126 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844f65d9f5-jrtsd"] Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.610405 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.613430 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" podUID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerName="dnsmasq-dns" containerID="cri-o://6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854" gracePeriod=10 Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.631985 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zwr4\" (UniqueName: \"kubernetes.io/projected/ba808a86-0ae2-458a-9d75-68f7e4a3139b-kube-api-access-2zwr4\") pod \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.632136 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-config-data\") pod \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.632165 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-run-httpd\") pod \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.632242 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-combined-ca-bundle\") pod \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.632285 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-sg-core-conf-yaml\") pod \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.632389 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-scripts\") pod \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.632457 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-log-httpd\") pod \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\" (UID: \"ba808a86-0ae2-458a-9d75-68f7e4a3139b\") " Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.633993 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ba808a86-0ae2-458a-9d75-68f7e4a3139b" (UID: "ba808a86-0ae2-458a-9d75-68f7e4a3139b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.634860 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ba808a86-0ae2-458a-9d75-68f7e4a3139b" (UID: "ba808a86-0ae2-458a-9d75-68f7e4a3139b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.645623 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-scripts" (OuterVolumeSpecName: "scripts") pod "ba808a86-0ae2-458a-9d75-68f7e4a3139b" (UID: "ba808a86-0ae2-458a-9d75-68f7e4a3139b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.651679 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba808a86-0ae2-458a-9d75-68f7e4a3139b-kube-api-access-2zwr4" (OuterVolumeSpecName: "kube-api-access-2zwr4") pod "ba808a86-0ae2-458a-9d75-68f7e4a3139b" (UID: "ba808a86-0ae2-458a-9d75-68f7e4a3139b"). InnerVolumeSpecName "kube-api-access-2zwr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.705812 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ba808a86-0ae2-458a-9d75-68f7e4a3139b" (UID: "ba808a86-0ae2-458a-9d75-68f7e4a3139b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.735490 4812 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.735517 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zwr4\" (UniqueName: \"kubernetes.io/projected/ba808a86-0ae2-458a-9d75-68f7e4a3139b-kube-api-access-2zwr4\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.735531 4812 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba808a86-0ae2-458a-9d75-68f7e4a3139b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.735539 4812 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.735547 4812 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.790822 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-854f6c55f5-qpp9f"] Nov 24 21:05:16 crc kubenswrapper[4812]: E1124 21:05:16.791571 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-notification-agent" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 
21:05:16.791589 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-notification-agent" Nov 24 21:05:16 crc kubenswrapper[4812]: E1124 21:05:16.791603 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-central-agent" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.791610 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-central-agent" Nov 24 21:05:16 crc kubenswrapper[4812]: E1124 21:05:16.791635 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="proxy-httpd" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.791642 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="proxy-httpd" Nov 24 21:05:16 crc kubenswrapper[4812]: E1124 21:05:16.791655 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="sg-core" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.791660 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="sg-core" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.791834 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="proxy-httpd" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.791850 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-central-agent" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.791862 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="sg-core" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.791873 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" containerName="ceilometer-notification-agent" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.793973 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.809515 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854f6c55f5-qpp9f"] Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.837157 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-openstack-cell1\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.837452 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-ovsdbserver-sb\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.837506 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.837541 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-dns-svc\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.837623 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-config\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.837681 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s8d4\" (UniqueName: \"kubernetes.io/projected/757a62a8-91d7-48fd-86c8-ec131f003bc4-kube-api-access-7s8d4\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.839476 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba808a86-0ae2-458a-9d75-68f7e4a3139b" (UID: "ba808a86-0ae2-458a-9d75-68f7e4a3139b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.864558 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-config-data" (OuterVolumeSpecName: "config-data") pod "ba808a86-0ae2-458a-9d75-68f7e4a3139b" (UID: "ba808a86-0ae2-458a-9d75-68f7e4a3139b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.938844 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-openstack-cell1\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.938911 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-ovsdbserver-sb\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.939763 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-openstack-cell1\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.939771 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.939723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-ovsdbserver-sb\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.939870 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-dns-svc\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.939990 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-config\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.940041 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s8d4\" (UniqueName: \"kubernetes.io/projected/757a62a8-91d7-48fd-86c8-ec131f003bc4-kube-api-access-7s8d4\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.940222 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.940238 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ba808a86-0ae2-458a-9d75-68f7e4a3139b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.940300 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.940833 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-dns-svc\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.941058 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/757a62a8-91d7-48fd-86c8-ec131f003bc4-config\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:16 crc kubenswrapper[4812]: I1124 21:05:16.963156 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s8d4\" (UniqueName: \"kubernetes.io/projected/757a62a8-91d7-48fd-86c8-ec131f003bc4-kube-api-access-7s8d4\") pod \"dnsmasq-dns-854f6c55f5-qpp9f\" (UID: \"757a62a8-91d7-48fd-86c8-ec131f003bc4\") " pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.085306 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.124383 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.142997 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-sb\") pod \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.143070 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-config\") pod \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.143250 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z5zf\" (UniqueName: \"kubernetes.io/projected/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-kube-api-access-6z5zf\") pod \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.143752 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-nb\") pod \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.143923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-dns-svc\") pod \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\" (UID: \"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963\") " Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.148981 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-kube-api-access-6z5zf" (OuterVolumeSpecName: "kube-api-access-6z5zf") pod "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" (UID: "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963"). InnerVolumeSpecName "kube-api-access-6z5zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.199443 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" (UID: "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.204640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" (UID: "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.208320 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" (UID: "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.243259 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-config" (OuterVolumeSpecName: "config") pod "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" (UID: "3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.246224 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.246376 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.246447 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.246521 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z5zf\" (UniqueName: \"kubernetes.io/projected/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-kube-api-access-6z5zf\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.246590 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.334098 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba808a86-0ae2-458a-9d75-68f7e4a3139b","Type":"ContainerDied","Data":"fb7b3be3a66373d3f463153e94ba90bdff6f859ba91064d9158ad67ecb8ceaf7"} Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.334187 4812 scope.go:117] "RemoveContainer" containerID="85d785be262af3ded0dec528f5e2d910053d652e624f6745c77c829bf6d52f8d" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.334184 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.339047 4812 generic.go:334] "Generic (PLEG): container finished" podID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerID="6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854" exitCode=0 Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.339133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" event={"ID":"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963","Type":"ContainerDied","Data":"6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854"} Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.339171 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" event={"ID":"3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963","Type":"ContainerDied","Data":"4c728eac596bc24c56ac988667df536fdc236249af856b98a1eb91df9656c092"} Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.339247 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844f65d9f5-jrtsd" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.365895 4812 scope.go:117] "RemoveContainer" containerID="01d4a8a54647ae57c5ba20a96678fcc9abe6993514f1782a121b183a21e1e783" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.373741 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.417881 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.427552 4812 scope.go:117] "RemoveContainer" containerID="3908652c1d4ff079a1ee5172d3b231803883dd7bffd22d0bef50285d935f3e4e" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.434032 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844f65d9f5-jrtsd"] Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.443065 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-844f65d9f5-jrtsd"] Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.459385 4812 scope.go:117] "RemoveContainer" containerID="a2bf6e6cc9014d4d02ca9705d3291c5a22583acb3896413876b8dcd106af7656" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.460280 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:05:17 crc kubenswrapper[4812]: E1124 21:05:17.460825 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerName="init" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.460843 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerName="init" Nov 24 21:05:17 crc kubenswrapper[4812]: E1124 21:05:17.460879 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerName="dnsmasq-dns" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.460886 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerName="dnsmasq-dns" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.461073 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" containerName="dnsmasq-dns" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.464114 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.466539 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.466818 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.467200 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.474827 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.496506 4812 scope.go:117] "RemoveContainer" containerID="6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.519520 4812 scope.go:117] "RemoveContainer" containerID="0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.552466 4812 scope.go:117] "RemoveContainer" containerID="6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553097 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e813f31-5ed3-4175-90ea-a120cea31966-run-httpd\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553170 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e813f31-5ed3-4175-90ea-a120cea31966-log-httpd\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: E1124 21:05:17.553183 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854\": container with ID starting with 6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854 not found: ID does not exist" containerID="6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553214 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854"} err="failed to get container status \"6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854\": rpc error: code = NotFound desc = could not find container \"6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854\": container with ID starting with 6a55f7f09228881722e65bdbda5418a6dd4003cd21d5b86fb6bc1ceb26384854 not found: ID does not exist" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553241 4812 scope.go:117] "RemoveContainer" containerID="0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553227 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-config-data\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " 
pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553561 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553662 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-scripts\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553758 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553883 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqm9v\" (UniqueName: \"kubernetes.io/projected/2e813f31-5ed3-4175-90ea-a120cea31966-kube-api-access-qqm9v\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.553909 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: E1124 21:05:17.554082 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc\": container with ID starting with 0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc not found: ID does not exist" containerID="0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.554118 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc"} err="failed to get container status \"0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc\": rpc error: code = NotFound desc = could not find container \"0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc\": container with ID starting with 0d7bcc7d1aae202171bb3c0f7696226292977308bfaf1c15237388a2a54c08bc not found: ID does not exist" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.606021 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854f6c55f5-qpp9f"] Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.654537 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqm9v\" (UniqueName: \"kubernetes.io/projected/2e813f31-5ed3-4175-90ea-a120cea31966-kube-api-access-qqm9v\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc 
kubenswrapper[4812]: I1124 21:05:17.654578 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.654622 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e813f31-5ed3-4175-90ea-a120cea31966-run-httpd\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.654651 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e813f31-5ed3-4175-90ea-a120cea31966-log-httpd\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.654681 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-config-data\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.654717 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.654757 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-scripts\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.654788 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.655431 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e813f31-5ed3-4175-90ea-a120cea31966-run-httpd\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.655712 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2e813f31-5ed3-4175-90ea-a120cea31966-log-httpd\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.658855 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.658934 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.658969 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.659116 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-config-data\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.662224 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e813f31-5ed3-4175-90ea-a120cea31966-scripts\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.679902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqm9v\" (UniqueName: \"kubernetes.io/projected/2e813f31-5ed3-4175-90ea-a120cea31966-kube-api-access-qqm9v\") pod \"ceilometer-0\" (UID: \"2e813f31-5ed3-4175-90ea-a120cea31966\") " pod="openstack/ceilometer-0" Nov 24 21:05:17 crc kubenswrapper[4812]: I1124 21:05:17.795030 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:05:18 crc kubenswrapper[4812]: I1124 21:05:18.105603 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:05:18 crc kubenswrapper[4812]: W1124 21:05:18.110375 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e813f31_5ed3_4175_90ea_a120cea31966.slice/crio-acbb9febf642bfa6f99b19de6ab423c908aad4e5124ff1430f28752f9dc4fc1c WatchSource:0}: Error finding container acbb9febf642bfa6f99b19de6ab423c908aad4e5124ff1430f28752f9dc4fc1c: Status 404 returned error can't find the container with id acbb9febf642bfa6f99b19de6ab423c908aad4e5124ff1430f28752f9dc4fc1c Nov 24 21:05:18 crc kubenswrapper[4812]: I1124 21:05:18.351903 4812 generic.go:334] "Generic (PLEG): container finished" podID="757a62a8-91d7-48fd-86c8-ec131f003bc4" containerID="f05da7aea5435c8c741e67199553a33a31a16e97998e22882f6f4cc85b435fa5" exitCode=0 Nov 24 21:05:18 crc kubenswrapper[4812]: I1124 21:05:18.351980 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" event={"ID":"757a62a8-91d7-48fd-86c8-ec131f003bc4","Type":"ContainerDied","Data":"f05da7aea5435c8c741e67199553a33a31a16e97998e22882f6f4cc85b435fa5"} Nov 24 21:05:18 crc kubenswrapper[4812]: I1124 21:05:18.352047 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" event={"ID":"757a62a8-91d7-48fd-86c8-ec131f003bc4","Type":"ContainerStarted","Data":"92ca36a189fdc4b25c38faf9e285533fa5aa336b5afe80ee797f4783dfd43f1f"} Nov 24 21:05:18 crc kubenswrapper[4812]: I1124 21:05:18.355114 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e813f31-5ed3-4175-90ea-a120cea31966","Type":"ContainerStarted","Data":"acbb9febf642bfa6f99b19de6ab423c908aad4e5124ff1430f28752f9dc4fc1c"} Nov 24 21:05:18 crc kubenswrapper[4812]: I1124 21:05:18.979464 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963" path="/var/lib/kubelet/pods/3f4a39a1-49b0-43e6-8b8e-4c9c3f80d963/volumes" Nov 24 21:05:18 crc kubenswrapper[4812]: I1124 21:05:18.981052 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba808a86-0ae2-458a-9d75-68f7e4a3139b" path="/var/lib/kubelet/pods/ba808a86-0ae2-458a-9d75-68f7e4a3139b/volumes" Nov 24 21:05:19 crc kubenswrapper[4812]: I1124 21:05:19.374747 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" event={"ID":"757a62a8-91d7-48fd-86c8-ec131f003bc4","Type":"ContainerStarted","Data":"dd0e506d1e5485eabd983c1e437a2abcdde913e551c9a94139f11b267e97e3c6"} Nov 24 21:05:19 crc kubenswrapper[4812]: I1124 21:05:19.376522 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:19 crc kubenswrapper[4812]: I1124 21:05:19.386314 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e813f31-5ed3-4175-90ea-a120cea31966","Type":"ContainerStarted","Data":"d7a046eec8218c2213ed2ba4ac7d1ad9dc90560502625c600cc2ff0d7cc2cbea"} Nov 24 21:05:19 crc kubenswrapper[4812]: I1124 21:05:19.420350 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" podStartSLOduration=3.420308893 podStartE2EDuration="3.420308893s" podCreationTimestamp="2025-11-24 21:05:16 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:05:19.405792802 +0000 UTC m=+6513.194745213" watchObservedRunningTime="2025-11-24 21:05:19.420308893 +0000 UTC m=+6513.209261274" Nov 24 21:05:20 crc kubenswrapper[4812]: I1124 21:05:20.396906 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e813f31-5ed3-4175-90ea-a120cea31966","Type":"ContainerStarted","Data":"dd1b064880e03bc5eb545c50564f322c33791e3064ac530f12a5e879139d26ad"} Nov 24 21:05:20 crc kubenswrapper[4812]: I1124 21:05:20.397474 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e813f31-5ed3-4175-90ea-a120cea31966","Type":"ContainerStarted","Data":"007b04c2436a1bdc5504095dd83a0ea6d5a063dce92516009def21d645d49f74"} Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.453964 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz"] Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.459086 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.465814 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz"] Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.467446 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.468085 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.468301 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.468456 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.473538 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2e813f31-5ed3-4175-90ea-a120cea31966","Type":"ContainerStarted","Data":"c7bcacc4e3c31ddd23653a37754fbc44e59c9f909544d683f2a2948a93697b43"} Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.474255 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.500621 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.014427508 podStartE2EDuration="5.500602514s" podCreationTimestamp="2025-11-24 21:05:17 +0000 UTC" firstStartedPulling="2025-11-24 21:05:18.113546846 +0000 UTC m=+6511.902499217" lastFinishedPulling="2025-11-24 21:05:21.599721842 +0000 UTC m=+6515.388674223" observedRunningTime="2025-11-24 21:05:22.499890144 +0000 UTC m=+6516.288842535" watchObservedRunningTime="2025-11-24 21:05:22.500602514 +0000 UTC m=+6516.289554875" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.538155 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-pre-adoption-validation-combined-ca-bundle\") pod 
\"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.538229 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj89v\" (UniqueName: \"kubernetes.io/projected/99fed2e9-34ea-4ee4-b596-134d9190482f-kube-api-access-fj89v\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.538257 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.538979 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.640586 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.640637 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj89v\" (UniqueName: \"kubernetes.io/projected/99fed2e9-34ea-4ee4-b596-134d9190482f-kube-api-access-fj89v\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.640660 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.640734 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.645487 
4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.646155 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.663596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.673236 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj89v\" (UniqueName: \"kubernetes.io/projected/99fed2e9-34ea-4ee4-b596-134d9190482f-kube-api-access-fj89v\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:22 crc kubenswrapper[4812]: I1124 21:05:22.786786 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:23 crc kubenswrapper[4812]: I1124 21:05:23.573566 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz"] Nov 24 21:05:23 crc kubenswrapper[4812]: I1124 21:05:23.652091 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 21:05:24 crc kubenswrapper[4812]: I1124 21:05:24.516739 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" event={"ID":"99fed2e9-34ea-4ee4-b596-134d9190482f","Type":"ContainerStarted","Data":"34e078989e92a4bc0f2648b32b1f1bf0f31d2a87c39be5dfcb454e90e4ad56fb"} Nov 24 21:05:27 crc kubenswrapper[4812]: I1124 21:05:27.126501 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-854f6c55f5-qpp9f" Nov 24 21:05:27 crc kubenswrapper[4812]: I1124 21:05:27.197270 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-846677f8f9-t6vjp"] Nov 24 21:05:27 crc kubenswrapper[4812]: I1124 21:05:27.197802 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" containerName="dnsmasq-dns" containerID="cri-o://229f51991e0c99373f438cd7202f5d150643a5a83cc4101d997bdef637123d7a" gracePeriod=10 Nov 24 21:05:27 crc kubenswrapper[4812]: I1124 21:05:27.570004 4812 generic.go:334] "Generic (PLEG): container finished" podID="85845ff2-82a4-4322-a54b-db5326e1c396" containerID="229f51991e0c99373f438cd7202f5d150643a5a83cc4101d997bdef637123d7a" exitCode=0 Nov 24 21:05:27 crc kubenswrapper[4812]: I1124 21:05:27.570041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" event={"ID":"85845ff2-82a4-4322-a54b-db5326e1c396","Type":"ContainerDied","Data":"229f51991e0c99373f438cd7202f5d150643a5a83cc4101d997bdef637123d7a"} Nov 24 21:05:31 crc kubenswrapper[4812]: I1124 21:05:31.544418 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.163:5353: connect: connection refused" Nov 24 21:05:32 crc kubenswrapper[4812]: I1124 21:05:32.998057 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:05:32 crc kubenswrapper[4812]: I1124 21:05:32.998447 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:05:32 crc kubenswrapper[4812]: I1124 21:05:32.998513 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:05:32 crc kubenswrapper[4812]: I1124 21:05:32.999475 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6358004cd03b55cd72e6ef5108e3f4db8274e1654f97564aedd7e31f6875f7ff"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:05:32 crc kubenswrapper[4812]: I1124 21:05:32.999553 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://6358004cd03b55cd72e6ef5108e3f4db8274e1654f97564aedd7e31f6875f7ff" gracePeriod=600 Nov 24 21:05:33 crc kubenswrapper[4812]: I1124 21:05:33.897875 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="6358004cd03b55cd72e6ef5108e3f4db8274e1654f97564aedd7e31f6875f7ff" exitCode=0 Nov 24 21:05:33 crc kubenswrapper[4812]: I1124 21:05:33.897956 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"6358004cd03b55cd72e6ef5108e3f4db8274e1654f97564aedd7e31f6875f7ff"} Nov 24 21:05:33 crc kubenswrapper[4812]: I1124 21:05:33.898246 4812 scope.go:117] "RemoveContainer" containerID="d3f053c0f61bf4b9cd5b85e165a85e22108ce28b174188d317946101b6633196" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.349964 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.477550 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5dqh\" (UniqueName: \"kubernetes.io/projected/85845ff2-82a4-4322-a54b-db5326e1c396-kube-api-access-j5dqh\") pod \"85845ff2-82a4-4322-a54b-db5326e1c396\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.477882 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-dns-svc\") pod \"85845ff2-82a4-4322-a54b-db5326e1c396\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.477907 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-config\") pod \"85845ff2-82a4-4322-a54b-db5326e1c396\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.477936 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-sb\") pod \"85845ff2-82a4-4322-a54b-db5326e1c396\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.478121 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-nb\") pod \"85845ff2-82a4-4322-a54b-db5326e1c396\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.478169 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: 
\"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-openstack-cell1\") pod \"85845ff2-82a4-4322-a54b-db5326e1c396\" (UID: \"85845ff2-82a4-4322-a54b-db5326e1c396\") " Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.483528 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85845ff2-82a4-4322-a54b-db5326e1c396-kube-api-access-j5dqh" (OuterVolumeSpecName: "kube-api-access-j5dqh") pod "85845ff2-82a4-4322-a54b-db5326e1c396" (UID: "85845ff2-82a4-4322-a54b-db5326e1c396"). InnerVolumeSpecName "kube-api-access-j5dqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.563781 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "85845ff2-82a4-4322-a54b-db5326e1c396" (UID: "85845ff2-82a4-4322-a54b-db5326e1c396"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.577852 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85845ff2-82a4-4322-a54b-db5326e1c396" (UID: "85845ff2-82a4-4322-a54b-db5326e1c396"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.580241 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.580274 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5dqh\" (UniqueName: \"kubernetes.io/projected/85845ff2-82a4-4322-a54b-db5326e1c396-kube-api-access-j5dqh\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.580288 4812 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.582225 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-config" (OuterVolumeSpecName: "config") pod "85845ff2-82a4-4322-a54b-db5326e1c396" (UID: "85845ff2-82a4-4322-a54b-db5326e1c396"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.586894 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85845ff2-82a4-4322-a54b-db5326e1c396" (UID: "85845ff2-82a4-4322-a54b-db5326e1c396"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.609469 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85845ff2-82a4-4322-a54b-db5326e1c396" (UID: "85845ff2-82a4-4322-a54b-db5326e1c396"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.682087 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.682119 4812 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.682128 4812 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85845ff2-82a4-4322-a54b-db5326e1c396-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.913223 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7"} Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.915474 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" event={"ID":"99fed2e9-34ea-4ee4-b596-134d9190482f","Type":"ContainerStarted","Data":"354d642ff94e2eee1e7691386e61c9a4e0b3f713dbd46a9e815e54d992883a0b"} Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.923464 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" event={"ID":"85845ff2-82a4-4322-a54b-db5326e1c396","Type":"ContainerDied","Data":"90ec943cea258cf1be6294ffb878c4e51d42551d784d8dcb8abe6e45ae456e28"} Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.923520 4812 scope.go:117] "RemoveContainer" containerID="229f51991e0c99373f438cd7202f5d150643a5a83cc4101d997bdef637123d7a" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.923668 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-846677f8f9-t6vjp" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.957038 4812 scope.go:117] "RemoveContainer" containerID="557aaf14591cfbd4b9efe86e029471fe1247aea3944b44f9b5bed3387190dcc9" Nov 24 21:05:34 crc kubenswrapper[4812]: I1124 21:05:34.968721 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" podStartSLOduration=2.471643813 podStartE2EDuration="12.968698692s" podCreationTimestamp="2025-11-24 21:05:22 +0000 UTC" firstStartedPulling="2025-11-24 21:05:23.585185339 +0000 UTC m=+6517.374137700" lastFinishedPulling="2025-11-24 21:05:34.082240198 +0000 UTC m=+6527.871192579" observedRunningTime="2025-11-24 21:05:34.959053049 +0000 UTC m=+6528.748005440" watchObservedRunningTime="2025-11-24 21:05:34.968698692 +0000 UTC m=+6528.757651063" Nov 24 21:05:35 crc kubenswrapper[4812]: I1124 21:05:35.021274 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-846677f8f9-t6vjp"] Nov 24 21:05:35 crc kubenswrapper[4812]: I1124 21:05:35.021323 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-846677f8f9-t6vjp"] Nov 24 21:05:36 crc kubenswrapper[4812]: I1124 21:05:36.985389 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" path="/var/lib/kubelet/pods/85845ff2-82a4-4322-a54b-db5326e1c396/volumes" Nov 24 21:05:47 crc kubenswrapper[4812]: I1124 21:05:47.812914 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:05:48 crc kubenswrapper[4812]: I1124 21:05:48.119362 4812 generic.go:334] "Generic (PLEG): container finished" podID="99fed2e9-34ea-4ee4-b596-134d9190482f" containerID="354d642ff94e2eee1e7691386e61c9a4e0b3f713dbd46a9e815e54d992883a0b" exitCode=0 Nov 24 21:05:48 crc kubenswrapper[4812]: I1124 21:05:48.119411 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" event={"ID":"99fed2e9-34ea-4ee4-b596-134d9190482f","Type":"ContainerDied","Data":"354d642ff94e2eee1e7691386e61c9a4e0b3f713dbd46a9e815e54d992883a0b"} Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.744263 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.863369 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj89v\" (UniqueName: \"kubernetes.io/projected/99fed2e9-34ea-4ee4-b596-134d9190482f-kube-api-access-fj89v\") pod \"99fed2e9-34ea-4ee4-b596-134d9190482f\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.863441 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-ssh-key\") pod \"99fed2e9-34ea-4ee4-b596-134d9190482f\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.863543 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-inventory\") pod \"99fed2e9-34ea-4ee4-b596-134d9190482f\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.863602 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-pre-adoption-validation-combined-ca-bundle\") pod \"99fed2e9-34ea-4ee4-b596-134d9190482f\" (UID: \"99fed2e9-34ea-4ee4-b596-134d9190482f\") " Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.877534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "99fed2e9-34ea-4ee4-b596-134d9190482f" (UID: "99fed2e9-34ea-4ee4-b596-134d9190482f"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.877560 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99fed2e9-34ea-4ee4-b596-134d9190482f-kube-api-access-fj89v" (OuterVolumeSpecName: "kube-api-access-fj89v") pod "99fed2e9-34ea-4ee4-b596-134d9190482f" (UID: "99fed2e9-34ea-4ee4-b596-134d9190482f"). InnerVolumeSpecName "kube-api-access-fj89v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.898216 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99fed2e9-34ea-4ee4-b596-134d9190482f" (UID: "99fed2e9-34ea-4ee4-b596-134d9190482f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.904511 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-inventory" (OuterVolumeSpecName: "inventory") pod "99fed2e9-34ea-4ee4-b596-134d9190482f" (UID: "99fed2e9-34ea-4ee4-b596-134d9190482f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.966593 4812 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.966748 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj89v\" (UniqueName: \"kubernetes.io/projected/99fed2e9-34ea-4ee4-b596-134d9190482f-kube-api-access-fj89v\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.966770 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:49 crc kubenswrapper[4812]: I1124 21:05:49.966783 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99fed2e9-34ea-4ee4-b596-134d9190482f-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:05:50 crc kubenswrapper[4812]: I1124 21:05:50.144830 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" event={"ID":"99fed2e9-34ea-4ee4-b596-134d9190482f","Type":"ContainerDied","Data":"34e078989e92a4bc0f2648b32b1f1bf0f31d2a87c39be5dfcb454e90e4ad56fb"} Nov 24 21:05:50 crc kubenswrapper[4812]: I1124 21:05:50.145154 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34e078989e92a4bc0f2648b32b1f1bf0f31d2a87c39be5dfcb454e90e4ad56fb" Nov 24 21:05:50 crc kubenswrapper[4812]: I1124 21:05:50.144892 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.897827 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll"] Nov 24 21:05:59 crc kubenswrapper[4812]: E1124 21:05:59.898874 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99fed2e9-34ea-4ee4-b596-134d9190482f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.898892 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="99fed2e9-34ea-4ee4-b596-134d9190482f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 24 21:05:59 crc kubenswrapper[4812]: E1124 21:05:59.898922 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" containerName="dnsmasq-dns" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.898929 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" containerName="dnsmasq-dns" Nov 24 21:05:59 crc kubenswrapper[4812]: E1124 21:05:59.898939 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" containerName="init" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.898945 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" containerName="init" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.899151 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="85845ff2-82a4-4322-a54b-db5326e1c396" containerName="dnsmasq-dns" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.899172 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="99fed2e9-34ea-4ee4-b596-134d9190482f" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.900079 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.903193 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.903418 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.903676 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.904157 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.913164 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll"] Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.948087 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.948229 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.948317 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:05:59 crc kubenswrapper[4812]: I1124 21:05:59.948388 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv2q4\" (UniqueName: \"kubernetes.io/projected/92040984-aa11-4d6a-9069-58324eac2e33-kube-api-access-cv2q4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.051008 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.051845 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-inventory\") 
pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.052094 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.052213 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv2q4\" (UniqueName: \"kubernetes.io/projected/92040984-aa11-4d6a-9069-58324eac2e33-kube-api-access-cv2q4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.058654 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.059921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.061575 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.069151 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv2q4\" (UniqueName: \"kubernetes.io/projected/92040984-aa11-4d6a-9069-58324eac2e33-kube-api-access-cv2q4\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.233952 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:06:00 crc kubenswrapper[4812]: I1124 21:06:00.863267 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll"] Nov 24 21:06:00 crc kubenswrapper[4812]: W1124 21:06:00.872243 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92040984_aa11_4d6a_9069_58324eac2e33.slice/crio-c993c92d1171f03ba7e8a7a41fc222cedd2d3e70518af65d0382d2c654290067 WatchSource:0}: Error finding container c993c92d1171f03ba7e8a7a41fc222cedd2d3e70518af65d0382d2c654290067: Status 404 returned error can't find the container with id c993c92d1171f03ba7e8a7a41fc222cedd2d3e70518af65d0382d2c654290067 Nov 24 21:06:01 crc kubenswrapper[4812]: I1124 21:06:01.299427 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" event={"ID":"92040984-aa11-4d6a-9069-58324eac2e33","Type":"ContainerStarted","Data":"c993c92d1171f03ba7e8a7a41fc222cedd2d3e70518af65d0382d2c654290067"} Nov 24 21:06:02 crc kubenswrapper[4812]: I1124 21:06:02.317241 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" event={"ID":"92040984-aa11-4d6a-9069-58324eac2e33","Type":"ContainerStarted","Data":"5138ac4851369d307f2b3d320ca3c93f12f0a9f732de4439c7e2e3500baf022b"} Nov 24 21:06:02 crc kubenswrapper[4812]: I1124 21:06:02.345118 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" podStartSLOduration=2.890484114 podStartE2EDuration="3.345095219s" podCreationTimestamp="2025-11-24 21:05:59 +0000 UTC" firstStartedPulling="2025-11-24 21:06:00.876268312 +0000 UTC m=+6554.665220713" lastFinishedPulling="2025-11-24 21:06:01.330879417 +0000 UTC m=+6555.119831818" observedRunningTime="2025-11-24 21:06:02.338963465 +0000 UTC m=+6556.127915936" watchObservedRunningTime="2025-11-24 21:06:02.345095219 +0000 UTC m=+6556.134047600" Nov 24 21:06:12 crc kubenswrapper[4812]: I1124 21:06:12.052599 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-vp2pp"] Nov 24 21:06:12 crc kubenswrapper[4812]: I1124 21:06:12.066440 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-vp2pp"] Nov 24 21:06:12 crc kubenswrapper[4812]: I1124 21:06:12.986864 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51aa012f-3b22-4157-8f1c-5f3259e247cf" path="/var/lib/kubelet/pods/51aa012f-3b22-4157-8f1c-5f3259e247cf/volumes" Nov 24 21:06:13 crc kubenswrapper[4812]: I1124 21:06:13.046604 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-e71f-account-create-plb85"] Nov 24 21:06:13 crc kubenswrapper[4812]: I1124 21:06:13.056431 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-e71f-account-create-plb85"] Nov 24 21:06:15 crc kubenswrapper[4812]: I1124 21:06:15.010466 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00a5f747-977c-40b4-81a0-13aae8abac4b" path="/var/lib/kubelet/pods/00a5f747-977c-40b4-81a0-13aae8abac4b/volumes" Nov 24 21:06:19 crc kubenswrapper[4812]: I1124 21:06:19.033456 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-ngtmc"] Nov 24 21:06:19 crc kubenswrapper[4812]: I1124 21:06:19.043914 4812 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-ngtmc"] Nov 24 21:06:20 crc kubenswrapper[4812]: I1124 21:06:20.038102 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-e98c-account-create-kr4vr"] Nov 24 21:06:20 crc kubenswrapper[4812]: I1124 21:06:20.049549 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-e98c-account-create-kr4vr"] Nov 24 21:06:20 crc kubenswrapper[4812]: I1124 21:06:20.980366 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c916ac76-e4bf-4630-9290-f98440a62829" path="/var/lib/kubelet/pods/c916ac76-e4bf-4630-9290-f98440a62829/volumes" Nov 24 21:06:20 crc kubenswrapper[4812]: I1124 21:06:20.981142 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2689091-8414-46ad-9ff3-a7ba09821ae0" path="/var/lib/kubelet/pods/f2689091-8414-46ad-9ff3-a7ba09821ae0/volumes" Nov 24 21:07:02 crc kubenswrapper[4812]: I1124 21:07:02.347801 4812 scope.go:117] "RemoveContainer" containerID="64ce64891f7dfb7fa6e4963d21f239138b26c94a48f6a665cf57ea61ab57bd4a" Nov 24 21:07:02 crc kubenswrapper[4812]: I1124 21:07:02.395543 4812 scope.go:117] "RemoveContainer" containerID="9be2e22325ce88be23babfabb1887ceba8b3851dbf9e41db7b9baae9e5576b93" Nov 24 21:07:02 crc kubenswrapper[4812]: I1124 21:07:02.469372 4812 scope.go:117] "RemoveContainer" containerID="a218716b939c05345035ce286cc5d597587d852fd31b35d6e8891cf6de31311f" Nov 24 21:07:02 crc kubenswrapper[4812]: I1124 21:07:02.528139 4812 scope.go:117] "RemoveContainer" containerID="3035ebcbea5a2f5adeb3b1aecfc1b950baf4eaf8cfeed1475779784e15de9d28" Nov 24 21:07:08 crc kubenswrapper[4812]: I1124 21:07:08.099854 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-rhkzt"] Nov 24 21:07:08 crc kubenswrapper[4812]: I1124 21:07:08.116959 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-rhkzt"] Nov 24 21:07:08 crc kubenswrapper[4812]: I1124 21:07:08.991461 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2daa50-f7db-469b-bfc0-06d96f21dcef" path="/var/lib/kubelet/pods/0f2daa50-f7db-469b-bfc0-06d96f21dcef/volumes" Nov 24 21:08:02 crc kubenswrapper[4812]: I1124 21:08:02.707579 4812 scope.go:117] "RemoveContainer" containerID="6f4af1090dcf12d32a97e6fb73946f7f54cae9957aef393c2b90a5eefa03fccb" Nov 24 21:08:02 crc kubenswrapper[4812]: I1124 21:08:02.746068 4812 scope.go:117] "RemoveContainer" containerID="59e05967d6bc599acc21cdce59b2715d00ea220c8dd65649d00999b9a8baa908" Nov 24 21:08:03 crc kubenswrapper[4812]: I1124 21:08:03.004502 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:08:03 crc kubenswrapper[4812]: I1124 21:08:03.004860 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:08:33 crc kubenswrapper[4812]: I1124 21:08:32.999600 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:08:33 crc kubenswrapper[4812]: I1124 21:08:33.000331 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:09:02 crc kubenswrapper[4812]: I1124 21:09:02.998168 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:02.999111 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:02.999179 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:03.000507 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:03.000586 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" gracePeriod=600 Nov 24 21:09:03 crc kubenswrapper[4812]: E1124 21:09:03.123424 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:03.656773 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" exitCode=0 Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:03.656821 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7"} Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:03.656876 4812 scope.go:117] "RemoveContainer" 
containerID="6358004cd03b55cd72e6ef5108e3f4db8274e1654f97564aedd7e31f6875f7ff" Nov 24 21:09:03 crc kubenswrapper[4812]: I1124 21:09:03.657954 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:09:03 crc kubenswrapper[4812]: E1124 21:09:03.658467 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:09:15 crc kubenswrapper[4812]: I1124 21:09:15.965807 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:09:15 crc kubenswrapper[4812]: E1124 21:09:15.966586 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:09:27 crc kubenswrapper[4812]: I1124 21:09:27.965776 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:09:27 crc kubenswrapper[4812]: E1124 21:09:27.966518 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:09:42 crc kubenswrapper[4812]: I1124 21:09:42.966769 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:09:42 crc kubenswrapper[4812]: E1124 21:09:42.968059 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:09:54 crc kubenswrapper[4812]: I1124 21:09:54.966306 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:09:54 crc kubenswrapper[4812]: E1124 21:09:54.967198 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.007578 4812 scope.go:117] "RemoveContainer" 
containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:10:06 crc kubenswrapper[4812]: E1124 21:10:06.015818 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.022280 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bbjj5"] Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.026211 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.045304 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbjj5"] Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.181202 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-catalog-content\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.181311 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmwts\" (UniqueName: \"kubernetes.io/projected/bbfc8c65-ed28-464e-a924-74cf386445ee-kube-api-access-nmwts\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.181432 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-utilities\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.282945 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-catalog-content\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.283028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmwts\" (UniqueName: \"kubernetes.io/projected/bbfc8c65-ed28-464e-a924-74cf386445ee-kube-api-access-nmwts\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.283083 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-utilities\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 
21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.283596 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-utilities\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.283824 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-catalog-content\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.311510 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmwts\" (UniqueName: \"kubernetes.io/projected/bbfc8c65-ed28-464e-a924-74cf386445ee-kube-api-access-nmwts\") pod \"certified-operators-bbjj5\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.353537 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:06 crc kubenswrapper[4812]: I1124 21:10:06.864233 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bbjj5"] Nov 24 21:10:07 crc kubenswrapper[4812]: I1124 21:10:07.485345 4812 generic.go:334] "Generic (PLEG): container finished" podID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerID="fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba" exitCode=0 Nov 24 21:10:07 crc kubenswrapper[4812]: I1124 21:10:07.485426 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbjj5" event={"ID":"bbfc8c65-ed28-464e-a924-74cf386445ee","Type":"ContainerDied","Data":"fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba"} Nov 24 21:10:07 crc kubenswrapper[4812]: I1124 21:10:07.485646 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbjj5" event={"ID":"bbfc8c65-ed28-464e-a924-74cf386445ee","Type":"ContainerStarted","Data":"9a64f9d01bdfd7841cb0d120ee38bea7ae58193779b38328c8b6bb99621bd173"} Nov 24 21:10:07 crc kubenswrapper[4812]: I1124 21:10:07.488638 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:10:08 crc kubenswrapper[4812]: I1124 21:10:08.503939 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbjj5" event={"ID":"bbfc8c65-ed28-464e-a924-74cf386445ee","Type":"ContainerStarted","Data":"e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3"} Nov 24 21:10:10 crc kubenswrapper[4812]: I1124 21:10:10.529725 4812 generic.go:334] "Generic (PLEG): container finished" podID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerID="e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3" exitCode=0 Nov 24 21:10:10 crc kubenswrapper[4812]: I1124 21:10:10.529821 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbjj5" event={"ID":"bbfc8c65-ed28-464e-a924-74cf386445ee","Type":"ContainerDied","Data":"e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3"} Nov 24 21:10:11 crc kubenswrapper[4812]: I1124 21:10:11.548121 
4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbjj5" event={"ID":"bbfc8c65-ed28-464e-a924-74cf386445ee","Type":"ContainerStarted","Data":"616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e"} Nov 24 21:10:11 crc kubenswrapper[4812]: I1124 21:10:11.593983 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bbjj5" podStartSLOduration=3.149436022 podStartE2EDuration="6.593954548s" podCreationTimestamp="2025-11-24 21:10:05 +0000 UTC" firstStartedPulling="2025-11-24 21:10:07.488211147 +0000 UTC m=+6801.277163538" lastFinishedPulling="2025-11-24 21:10:10.932729663 +0000 UTC m=+6804.721682064" observedRunningTime="2025-11-24 21:10:11.580013933 +0000 UTC m=+6805.368966314" watchObservedRunningTime="2025-11-24 21:10:11.593954548 +0000 UTC m=+6805.382906959" Nov 24 21:10:16 crc kubenswrapper[4812]: I1124 21:10:16.354447 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:16 crc kubenswrapper[4812]: I1124 21:10:16.354977 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:16 crc kubenswrapper[4812]: I1124 21:10:16.434905 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:16 crc kubenswrapper[4812]: I1124 21:10:16.715708 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:16 crc kubenswrapper[4812]: I1124 21:10:16.786399 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbjj5"] Nov 24 21:10:18 crc kubenswrapper[4812]: I1124 21:10:18.654027 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bbjj5" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="registry-server" containerID="cri-o://616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e" gracePeriod=2 Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.303732 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.434085 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmwts\" (UniqueName: \"kubernetes.io/projected/bbfc8c65-ed28-464e-a924-74cf386445ee-kube-api-access-nmwts\") pod \"bbfc8c65-ed28-464e-a924-74cf386445ee\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.434541 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-catalog-content\") pod \"bbfc8c65-ed28-464e-a924-74cf386445ee\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.434572 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-utilities\") pod \"bbfc8c65-ed28-464e-a924-74cf386445ee\" (UID: \"bbfc8c65-ed28-464e-a924-74cf386445ee\") " Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.435629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-utilities" (OuterVolumeSpecName: "utilities") pod "bbfc8c65-ed28-464e-a924-74cf386445ee" (UID: "bbfc8c65-ed28-464e-a924-74cf386445ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.444022 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfc8c65-ed28-464e-a924-74cf386445ee-kube-api-access-nmwts" (OuterVolumeSpecName: "kube-api-access-nmwts") pod "bbfc8c65-ed28-464e-a924-74cf386445ee" (UID: "bbfc8c65-ed28-464e-a924-74cf386445ee"). InnerVolumeSpecName "kube-api-access-nmwts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.488710 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbfc8c65-ed28-464e-a924-74cf386445ee" (UID: "bbfc8c65-ed28-464e-a924-74cf386445ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.538606 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmwts\" (UniqueName: \"kubernetes.io/projected/bbfc8c65-ed28-464e-a924-74cf386445ee-kube-api-access-nmwts\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.538659 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.538680 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbfc8c65-ed28-464e-a924-74cf386445ee-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.669194 4812 generic.go:334] "Generic (PLEG): container finished" podID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerID="616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e" exitCode=0 Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.669237 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbjj5" event={"ID":"bbfc8c65-ed28-464e-a924-74cf386445ee","Type":"ContainerDied","Data":"616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e"} Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.669263 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bbjj5" event={"ID":"bbfc8c65-ed28-464e-a924-74cf386445ee","Type":"ContainerDied","Data":"9a64f9d01bdfd7841cb0d120ee38bea7ae58193779b38328c8b6bb99621bd173"} Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.669280 4812 scope.go:117] "RemoveContainer" containerID="616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.669310 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bbjj5" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.715404 4812 scope.go:117] "RemoveContainer" containerID="e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.725134 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bbjj5"] Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.737074 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bbjj5"] Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.761849 4812 scope.go:117] "RemoveContainer" containerID="fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.838477 4812 scope.go:117] "RemoveContainer" containerID="616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e" Nov 24 21:10:19 crc kubenswrapper[4812]: E1124 21:10:19.838913 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e\": container with ID starting with 616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e not found: ID does not exist" containerID="616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.838957 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e"} err="failed to get container status \"616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e\": rpc error: code = NotFound desc = could not find container \"616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e\": container with ID starting with 616566d293039336f92f88796306ec930addd8ac30fe8af80b3222d68397466e not found: ID does not exist" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.838984 4812 scope.go:117] "RemoveContainer" containerID="e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3" Nov 24 21:10:19 crc kubenswrapper[4812]: E1124 21:10:19.839248 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3\": container with ID starting with e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3 not found: ID does not exist" containerID="e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.839285 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3"} err="failed to get container status \"e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3\": rpc error: code = NotFound desc = could not find container \"e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3\": container with ID starting with e47cd2d483f4ab146d6017e87b3942d2acf83be14590f14b766cc53da6260dd3 not found: ID does not exist" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.839310 4812 scope.go:117] "RemoveContainer" containerID="fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba" Nov 24 21:10:19 crc kubenswrapper[4812]: E1124 21:10:19.839801 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba\": container with ID starting with fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba not found: ID does not exist" containerID="fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba" Nov 24 21:10:19 crc kubenswrapper[4812]: I1124 21:10:19.839828 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba"} err="failed to get container status \"fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba\": rpc error: code = NotFound desc = could not find container \"fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba\": container with ID starting with fe05e96479abbe650cf2c0936671d4a15c38e117679f046353ffe465bbdfe8ba not found: ID does not exist" Nov 24 21:10:20 crc kubenswrapper[4812]: I1124 21:10:20.966971 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:10:20 crc kubenswrapper[4812]: E1124 21:10:20.967862 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:10:20 crc kubenswrapper[4812]: I1124 21:10:20.984600 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" path="/var/lib/kubelet/pods/bbfc8c65-ed28-464e-a924-74cf386445ee/volumes" Nov 24 21:10:35 crc kubenswrapper[4812]: I1124 21:10:35.967793 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:10:35 crc kubenswrapper[4812]: E1124 21:10:35.972748 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:10:50 crc kubenswrapper[4812]: I1124 21:10:50.966469 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:10:50 crc kubenswrapper[4812]: E1124 21:10:50.971183 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:11:03 crc kubenswrapper[4812]: I1124 21:11:03.966964 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:11:03 crc kubenswrapper[4812]: E1124 21:11:03.968086 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:11:14 crc kubenswrapper[4812]: I1124 21:11:14.072758 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3ae3-account-create-hzs8w"] Nov 24 21:11:14 crc kubenswrapper[4812]: I1124 21:11:14.089019 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-9qzj8"] Nov 24 21:11:14 crc kubenswrapper[4812]: I1124 21:11:14.099476 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-9qzj8"] Nov 24 21:11:14 crc kubenswrapper[4812]: I1124 21:11:14.109396 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3ae3-account-create-hzs8w"] Nov 24 21:11:14 crc kubenswrapper[4812]: I1124 21:11:14.982900 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e8ab6b8-4ac3-479d-b176-30105aba1c8c" path="/var/lib/kubelet/pods/2e8ab6b8-4ac3-479d-b176-30105aba1c8c/volumes" Nov 24 21:11:14 crc kubenswrapper[4812]: I1124 21:11:14.985222 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce7cffc-90f1-4d13-ace8-057618426139" path="/var/lib/kubelet/pods/dce7cffc-90f1-4d13-ace8-057618426139/volumes" Nov 24 21:11:18 crc kubenswrapper[4812]: I1124 21:11:18.965723 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:11:18 crc kubenswrapper[4812]: E1124 21:11:18.966914 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:11:29 crc kubenswrapper[4812]: I1124 21:11:29.043464 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-jdv9d"] Nov 24 21:11:29 crc kubenswrapper[4812]: I1124 21:11:29.062731 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-jdv9d"] Nov 24 21:11:30 crc kubenswrapper[4812]: I1124 21:11:30.967070 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:11:30 crc kubenswrapper[4812]: E1124 21:11:30.967868 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:11:30 crc kubenswrapper[4812]: I1124 21:11:30.983651 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1176b595-2448-400c-9e8c-ac98aef730fb" path="/var/lib/kubelet/pods/1176b595-2448-400c-9e8c-ac98aef730fb/volumes" Nov 24 21:11:44 crc kubenswrapper[4812]: I1124 21:11:44.965944 4812 scope.go:117] "RemoveContainer" 
containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:11:44 crc kubenswrapper[4812]: E1124 21:11:44.966918 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:11:57 crc kubenswrapper[4812]: I1124 21:11:57.966819 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:11:57 crc kubenswrapper[4812]: E1124 21:11:57.968095 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:12:02 crc kubenswrapper[4812]: I1124 21:12:02.953757 4812 scope.go:117] "RemoveContainer" containerID="b985c65af0c23ef4cc6998a0180bfdf3f35b9232a7859be420f5e3e8eee96b77" Nov 24 21:12:02 crc kubenswrapper[4812]: I1124 21:12:02.992627 4812 scope.go:117] "RemoveContainer" containerID="4eb33002975bea1e1d4aa9cafbc75a19af5549c53720350eaa8f43a286d95d0e" Nov 24 21:12:03 crc kubenswrapper[4812]: I1124 21:12:03.065838 4812 scope.go:117] "RemoveContainer" containerID="df80eb27e968167ecb1cdf8606ae99171329318e7ea207545a295bd01ea8cc0a" Nov 24 21:12:10 crc kubenswrapper[4812]: I1124 21:12:10.966888 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:12:10 crc kubenswrapper[4812]: E1124 21:12:10.967800 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:12:24 crc kubenswrapper[4812]: I1124 21:12:24.968807 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:12:24 crc kubenswrapper[4812]: E1124 21:12:24.969962 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:12:36 crc kubenswrapper[4812]: I1124 21:12:36.974313 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:12:36 crc kubenswrapper[4812]: E1124 21:12:36.975578 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:12:50 crc kubenswrapper[4812]: I1124 21:12:50.966238 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:12:50 crc kubenswrapper[4812]: E1124 21:12:50.966976 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:13:02 crc kubenswrapper[4812]: I1124 21:13:02.966217 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:13:02 crc kubenswrapper[4812]: E1124 21:13:02.967208 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:13:15 crc kubenswrapper[4812]: I1124 21:13:15.965710 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:13:15 crc kubenswrapper[4812]: E1124 21:13:15.966707 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:13:30 crc kubenswrapper[4812]: I1124 21:13:30.967000 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:13:30 crc kubenswrapper[4812]: E1124 21:13:30.968078 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:13:45 crc kubenswrapper[4812]: I1124 21:13:45.966014 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:13:45 crc kubenswrapper[4812]: E1124 21:13:45.967440 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.373937 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7kwjw"] Nov 24 21:13:53 crc kubenswrapper[4812]: E1124 21:13:53.375063 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="registry-server" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.375086 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="registry-server" Nov 24 21:13:53 crc kubenswrapper[4812]: E1124 21:13:53.375107 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="extract-utilities" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.375118 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="extract-utilities" Nov 24 21:13:53 crc kubenswrapper[4812]: E1124 21:13:53.375196 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="extract-content" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.375208 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="extract-content" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.375563 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfc8c65-ed28-464e-a924-74cf386445ee" containerName="registry-server" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.379328 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.386861 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kwjw"] Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.445573 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-utilities\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.445842 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-catalog-content\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.445954 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkp4f\" (UniqueName: \"kubernetes.io/projected/260045ff-e637-4d07-a9ed-b71b97f04819-kube-api-access-pkp4f\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.549592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-catalog-content\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.549653 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkp4f\" (UniqueName: \"kubernetes.io/projected/260045ff-e637-4d07-a9ed-b71b97f04819-kube-api-access-pkp4f\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.549744 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-utilities\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.550170 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-utilities\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.550396 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-catalog-content\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.590521 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pkp4f\" (UniqueName: \"kubernetes.io/projected/260045ff-e637-4d07-a9ed-b71b97f04819-kube-api-access-pkp4f\") pod \"redhat-operators-7kwjw\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:53 crc kubenswrapper[4812]: I1124 21:13:53.710237 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:13:54 crc kubenswrapper[4812]: I1124 21:13:54.231568 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kwjw"] Nov 24 21:13:54 crc kubenswrapper[4812]: I1124 21:13:54.368411 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kwjw" event={"ID":"260045ff-e637-4d07-a9ed-b71b97f04819","Type":"ContainerStarted","Data":"72875b54248c7e50657ac8554fc5126fd6ca9f0b45994ca4d5ce3265f3f433f7"} Nov 24 21:13:55 crc kubenswrapper[4812]: I1124 21:13:55.384402 4812 generic.go:334] "Generic (PLEG): container finished" podID="260045ff-e637-4d07-a9ed-b71b97f04819" containerID="68ee7b5930490b4b18e3bd0ce2ffdca33f5ce0c73561ba9dba221bb2f8e09797" exitCode=0 Nov 24 21:13:55 crc kubenswrapper[4812]: I1124 21:13:55.384521 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kwjw" event={"ID":"260045ff-e637-4d07-a9ed-b71b97f04819","Type":"ContainerDied","Data":"68ee7b5930490b4b18e3bd0ce2ffdca33f5ce0c73561ba9dba221bb2f8e09797"} Nov 24 21:13:56 crc kubenswrapper[4812]: I1124 21:13:56.395614 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kwjw" event={"ID":"260045ff-e637-4d07-a9ed-b71b97f04819","Type":"ContainerStarted","Data":"dfe61ddfd8d440807ba0daa01f808e10a5f279f8f3797be708d3d560138fa4bf"} Nov 24 21:13:57 crc kubenswrapper[4812]: I1124 21:13:57.967178 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:13:57 crc kubenswrapper[4812]: E1124 21:13:57.967715 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:14:02 crc kubenswrapper[4812]: I1124 21:14:02.466765 4812 generic.go:334] "Generic (PLEG): container finished" podID="260045ff-e637-4d07-a9ed-b71b97f04819" containerID="dfe61ddfd8d440807ba0daa01f808e10a5f279f8f3797be708d3d560138fa4bf" exitCode=0 Nov 24 21:14:02 crc kubenswrapper[4812]: I1124 21:14:02.466872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kwjw" event={"ID":"260045ff-e637-4d07-a9ed-b71b97f04819","Type":"ContainerDied","Data":"dfe61ddfd8d440807ba0daa01f808e10a5f279f8f3797be708d3d560138fa4bf"} Nov 24 21:14:03 crc kubenswrapper[4812]: I1124 21:14:03.479172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kwjw" event={"ID":"260045ff-e637-4d07-a9ed-b71b97f04819","Type":"ContainerStarted","Data":"13e7fe7c3836e216bfeded33b867af555e227154b6ecbec873b2138bd53d164c"} Nov 24 21:14:03 crc kubenswrapper[4812]: I1124 21:14:03.518011 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-7kwjw" podStartSLOduration=3.018218854 podStartE2EDuration="10.517987998s" podCreationTimestamp="2025-11-24 21:13:53 +0000 UTC" firstStartedPulling="2025-11-24 21:13:55.388186851 +0000 UTC m=+7029.177139262" lastFinishedPulling="2025-11-24 21:14:02.887956005 +0000 UTC m=+7036.676908406" observedRunningTime="2025-11-24 21:14:03.508495619 +0000 UTC m=+7037.297448080" watchObservedRunningTime="2025-11-24 21:14:03.517987998 +0000 UTC m=+7037.306940389" Nov 24 21:14:03 crc kubenswrapper[4812]: I1124 21:14:03.711289 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:14:03 crc kubenswrapper[4812]: I1124 21:14:03.711772 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:14:04 crc kubenswrapper[4812]: I1124 21:14:04.782891 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7kwjw" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="registry-server" probeResult="failure" output=< Nov 24 21:14:04 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:14:04 crc kubenswrapper[4812]: > Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.549735 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qtm7x"] Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.554873 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.569541 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtm7x"] Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.662706 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82xxl\" (UniqueName: \"kubernetes.io/projected/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-kube-api-access-82xxl\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.662780 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-catalog-content\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.663021 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-utilities\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.765892 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82xxl\" (UniqueName: \"kubernetes.io/projected/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-kube-api-access-82xxl\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.766236 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-catalog-content\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.766472 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-utilities\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.766849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-catalog-content\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.767548 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-utilities\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.794231 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82xxl\" (UniqueName: \"kubernetes.io/projected/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-kube-api-access-82xxl\") pod \"redhat-marketplace-qtm7x\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:07 crc kubenswrapper[4812]: I1124 21:14:07.888819 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:08 crc kubenswrapper[4812]: I1124 21:14:08.558815 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtm7x"] Nov 24 21:14:09 crc kubenswrapper[4812]: I1124 21:14:09.561027 4812 generic.go:334] "Generic (PLEG): container finished" podID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerID="5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11" exitCode=0 Nov 24 21:14:09 crc kubenswrapper[4812]: I1124 21:14:09.561525 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtm7x" event={"ID":"aa54f605-fa96-47c9-97e8-cc1a8ebff5db","Type":"ContainerDied","Data":"5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11"} Nov 24 21:14:09 crc kubenswrapper[4812]: I1124 21:14:09.563785 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtm7x" event={"ID":"aa54f605-fa96-47c9-97e8-cc1a8ebff5db","Type":"ContainerStarted","Data":"d6c7b6f20b1b538331e31e5225f9296e4482dcac21e85f4837a60b60e8aeda1e"} Nov 24 21:14:10 crc kubenswrapper[4812]: I1124 21:14:10.077675 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-9cz4n"] Nov 24 21:14:10 crc kubenswrapper[4812]: I1124 21:14:10.089671 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-2823-account-create-qqmxf"] Nov 24 21:14:10 crc kubenswrapper[4812]: I1124 21:14:10.099652 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-9cz4n"] Nov 24 21:14:10 crc kubenswrapper[4812]: I1124 21:14:10.112157 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-2823-account-create-qqmxf"] Nov 24 21:14:10 crc kubenswrapper[4812]: I1124 21:14:10.581590 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtm7x" event={"ID":"aa54f605-fa96-47c9-97e8-cc1a8ebff5db","Type":"ContainerStarted","Data":"8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009"} Nov 24 21:14:10 crc kubenswrapper[4812]: I1124 21:14:10.967387 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:14:10 crc kubenswrapper[4812]: I1124 21:14:10.983833 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b" path="/var/lib/kubelet/pods/50c7c0aa-474c-48dd-9f19-0f2a9ed16e7b/volumes" Nov 24 21:14:11 crc kubenswrapper[4812]: I1124 21:14:11.043233 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e44fd1-85cd-40a3-b528-bce9911ea477" path="/var/lib/kubelet/pods/c6e44fd1-85cd-40a3-b528-bce9911ea477/volumes" Nov 24 21:14:11 crc kubenswrapper[4812]: I1124 21:14:11.599840 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"42da69ace901f1d1c1447c55d84bba6cad25de3b45c999d9dc8c61d5cd823f6d"} Nov 24 21:14:11 crc kubenswrapper[4812]: I1124 21:14:11.603768 4812 generic.go:334] "Generic (PLEG): container finished" podID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerID="8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009" exitCode=0 Nov 24 21:14:11 crc kubenswrapper[4812]: I1124 21:14:11.603869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-qtm7x" event={"ID":"aa54f605-fa96-47c9-97e8-cc1a8ebff5db","Type":"ContainerDied","Data":"8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009"} Nov 24 21:14:13 crc kubenswrapper[4812]: I1124 21:14:13.629691 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtm7x" event={"ID":"aa54f605-fa96-47c9-97e8-cc1a8ebff5db","Type":"ContainerStarted","Data":"59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb"} Nov 24 21:14:13 crc kubenswrapper[4812]: I1124 21:14:13.659963 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qtm7x" podStartSLOduration=3.7931298509999998 podStartE2EDuration="6.65994217s" podCreationTimestamp="2025-11-24 21:14:07 +0000 UTC" firstStartedPulling="2025-11-24 21:14:09.566311707 +0000 UTC m=+7043.355264118" lastFinishedPulling="2025-11-24 21:14:12.433124056 +0000 UTC m=+7046.222076437" observedRunningTime="2025-11-24 21:14:13.651687756 +0000 UTC m=+7047.440640137" watchObservedRunningTime="2025-11-24 21:14:13.65994217 +0000 UTC m=+7047.448894541" Nov 24 21:14:13 crc kubenswrapper[4812]: I1124 21:14:13.760540 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:14:13 crc kubenswrapper[4812]: I1124 21:14:13.845073 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:14:15 crc kubenswrapper[4812]: I1124 21:14:15.740941 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kwjw"] Nov 24 21:14:15 crc kubenswrapper[4812]: I1124 21:14:15.741464 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7kwjw" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="registry-server" containerID="cri-o://13e7fe7c3836e216bfeded33b867af555e227154b6ecbec873b2138bd53d164c" gracePeriod=2 Nov 24 21:14:16 crc kubenswrapper[4812]: I1124 21:14:16.674589 4812 generic.go:334] "Generic (PLEG): container finished" podID="260045ff-e637-4d07-a9ed-b71b97f04819" containerID="13e7fe7c3836e216bfeded33b867af555e227154b6ecbec873b2138bd53d164c" exitCode=0 Nov 24 21:14:16 crc kubenswrapper[4812]: I1124 21:14:16.674752 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kwjw" event={"ID":"260045ff-e637-4d07-a9ed-b71b97f04819","Type":"ContainerDied","Data":"13e7fe7c3836e216bfeded33b867af555e227154b6ecbec873b2138bd53d164c"} Nov 24 21:14:16 crc kubenswrapper[4812]: I1124 21:14:16.919004 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.033092 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-utilities\") pod \"260045ff-e637-4d07-a9ed-b71b97f04819\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.033276 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-catalog-content\") pod \"260045ff-e637-4d07-a9ed-b71b97f04819\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.033541 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkp4f\" (UniqueName: \"kubernetes.io/projected/260045ff-e637-4d07-a9ed-b71b97f04819-kube-api-access-pkp4f\") pod \"260045ff-e637-4d07-a9ed-b71b97f04819\" (UID: \"260045ff-e637-4d07-a9ed-b71b97f04819\") " Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.036410 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-utilities" (OuterVolumeSpecName: "utilities") pod "260045ff-e637-4d07-a9ed-b71b97f04819" (UID: "260045ff-e637-4d07-a9ed-b71b97f04819"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.055023 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260045ff-e637-4d07-a9ed-b71b97f04819-kube-api-access-pkp4f" (OuterVolumeSpecName: "kube-api-access-pkp4f") pod "260045ff-e637-4d07-a9ed-b71b97f04819" (UID: "260045ff-e637-4d07-a9ed-b71b97f04819"). InnerVolumeSpecName "kube-api-access-pkp4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.136797 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.136831 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkp4f\" (UniqueName: \"kubernetes.io/projected/260045ff-e637-4d07-a9ed-b71b97f04819-kube-api-access-pkp4f\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.145638 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "260045ff-e637-4d07-a9ed-b71b97f04819" (UID: "260045ff-e637-4d07-a9ed-b71b97f04819"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.238864 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/260045ff-e637-4d07-a9ed-b71b97f04819-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.691008 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kwjw" event={"ID":"260045ff-e637-4d07-a9ed-b71b97f04819","Type":"ContainerDied","Data":"72875b54248c7e50657ac8554fc5126fd6ca9f0b45994ca4d5ce3265f3f433f7"} Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.691125 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kwjw" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.691527 4812 scope.go:117] "RemoveContainer" containerID="13e7fe7c3836e216bfeded33b867af555e227154b6ecbec873b2138bd53d164c" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.730137 4812 scope.go:117] "RemoveContainer" containerID="dfe61ddfd8d440807ba0daa01f808e10a5f279f8f3797be708d3d560138fa4bf" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.761495 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kwjw"] Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.774163 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7kwjw"] Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.777848 4812 scope.go:117] "RemoveContainer" containerID="68ee7b5930490b4b18e3bd0ce2ffdca33f5ce0c73561ba9dba221bb2f8e09797" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.889606 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.889703 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:17 crc kubenswrapper[4812]: I1124 21:14:17.946044 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:18 crc kubenswrapper[4812]: I1124 21:14:18.778925 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:18 crc kubenswrapper[4812]: I1124 21:14:18.988505 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" path="/var/lib/kubelet/pods/260045ff-e637-4d07-a9ed-b71b97f04819/volumes" Nov 24 21:14:20 crc kubenswrapper[4812]: I1124 21:14:20.146780 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtm7x"] Nov 24 21:14:20 crc kubenswrapper[4812]: I1124 21:14:20.731358 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qtm7x" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="registry-server" containerID="cri-o://59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb" gracePeriod=2 Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.067370 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-zf4x4"] Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.083406 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/aodh-db-sync-zf4x4"] Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.286863 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.342603 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-utilities\") pod \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.342771 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-catalog-content\") pod \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.343022 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82xxl\" (UniqueName: \"kubernetes.io/projected/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-kube-api-access-82xxl\") pod \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\" (UID: \"aa54f605-fa96-47c9-97e8-cc1a8ebff5db\") " Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.344472 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-utilities" (OuterVolumeSpecName: "utilities") pod "aa54f605-fa96-47c9-97e8-cc1a8ebff5db" (UID: "aa54f605-fa96-47c9-97e8-cc1a8ebff5db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.351629 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-kube-api-access-82xxl" (OuterVolumeSpecName: "kube-api-access-82xxl") pod "aa54f605-fa96-47c9-97e8-cc1a8ebff5db" (UID: "aa54f605-fa96-47c9-97e8-cc1a8ebff5db"). InnerVolumeSpecName "kube-api-access-82xxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.372444 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa54f605-fa96-47c9-97e8-cc1a8ebff5db" (UID: "aa54f605-fa96-47c9-97e8-cc1a8ebff5db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.446257 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82xxl\" (UniqueName: \"kubernetes.io/projected/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-kube-api-access-82xxl\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.446602 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.446623 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa54f605-fa96-47c9-97e8-cc1a8ebff5db-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.750213 4812 generic.go:334] "Generic (PLEG): container finished" podID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerID="59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb" exitCode=0 Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.750307 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtm7x" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.750317 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtm7x" event={"ID":"aa54f605-fa96-47c9-97e8-cc1a8ebff5db","Type":"ContainerDied","Data":"59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb"} Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.750526 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtm7x" event={"ID":"aa54f605-fa96-47c9-97e8-cc1a8ebff5db","Type":"ContainerDied","Data":"d6c7b6f20b1b538331e31e5225f9296e4482dcac21e85f4837a60b60e8aeda1e"} Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.750594 4812 scope.go:117] "RemoveContainer" containerID="59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.793118 4812 scope.go:117] "RemoveContainer" containerID="8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.806313 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtm7x"] Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.830323 4812 scope.go:117] "RemoveContainer" containerID="5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.835665 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtm7x"] Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.911823 4812 scope.go:117] "RemoveContainer" containerID="59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb" Nov 24 21:14:21 crc kubenswrapper[4812]: E1124 21:14:21.912801 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb\": container with ID starting with 59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb not found: ID does not exist" containerID="59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.912870 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb"} err="failed to get container status \"59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb\": rpc error: code = NotFound desc = could not find container \"59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb\": container with ID starting with 59173e5eff01b8836d5923375fb79676fa9696acf69184512bb6cbb7322e7efb not found: ID does not exist" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.912895 4812 scope.go:117] "RemoveContainer" containerID="8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009" Nov 24 21:14:21 crc kubenswrapper[4812]: E1124 21:14:21.913460 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009\": container with ID starting with 8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009 not found: ID does not exist" containerID="8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.913502 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009"} err="failed to get container status \"8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009\": rpc error: code = NotFound desc = could not find container \"8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009\": container with ID starting with 8224b8e994143e702454e37fc9a92356ffb5881f2fbe766d75f0edd72798a009 not found: ID does not exist" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.913530 4812 scope.go:117] "RemoveContainer" containerID="5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11" Nov 24 21:14:21 crc kubenswrapper[4812]: E1124 21:14:21.914221 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11\": container with ID starting with 5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11 not found: ID does not exist" containerID="5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11" Nov 24 21:14:21 crc kubenswrapper[4812]: I1124 21:14:21.914242 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11"} err="failed to get container status \"5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11\": rpc error: code = NotFound desc = could not find container \"5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11\": container with ID starting with 5ccb7e32b56d118bf141724d24f08e00989573fe75b899ef26fb768dbd785a11 not found: ID does not exist" Nov 24 21:14:22 crc kubenswrapper[4812]: I1124 21:14:22.980067 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" path="/var/lib/kubelet/pods/aa54f605-fa96-47c9-97e8-cc1a8ebff5db/volumes" Nov 24 21:14:22 crc kubenswrapper[4812]: I1124 21:14:22.982626 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1751d50-a273-4cdd-8f49-6db0f32f1a71" path="/var/lib/kubelet/pods/f1751d50-a273-4cdd-8f49-6db0f32f1a71/volumes" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 
21:14:41.291997 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-49zjq"] Nov 24 21:14:41 crc kubenswrapper[4812]: E1124 21:14:41.292890 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="extract-utilities" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.292904 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="extract-utilities" Nov 24 21:14:41 crc kubenswrapper[4812]: E1124 21:14:41.292925 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="registry-server" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.292933 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="registry-server" Nov 24 21:14:41 crc kubenswrapper[4812]: E1124 21:14:41.292953 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="registry-server" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.292960 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="registry-server" Nov 24 21:14:41 crc kubenswrapper[4812]: E1124 21:14:41.292992 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="extract-content" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.293000 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="extract-content" Nov 24 21:14:41 crc kubenswrapper[4812]: E1124 21:14:41.293025 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="extract-utilities" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.293034 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="extract-utilities" Nov 24 21:14:41 crc kubenswrapper[4812]: E1124 21:14:41.293050 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="extract-content" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.293058 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="extract-content" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.293299 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa54f605-fa96-47c9-97e8-cc1a8ebff5db" containerName="registry-server" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.293319 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="260045ff-e637-4d07-a9ed-b71b97f04819" containerName="registry-server" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.295198 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.309914 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49zjq"] Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.387029 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt776\" (UniqueName: \"kubernetes.io/projected/32d08a01-dc2f-44f2-ac0e-353ad3559676-kube-api-access-wt776\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.387499 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-catalog-content\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.387600 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-utilities\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.489747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt776\" (UniqueName: \"kubernetes.io/projected/32d08a01-dc2f-44f2-ac0e-353ad3559676-kube-api-access-wt776\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.490016 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-catalog-content\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.490078 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-utilities\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.490760 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-catalog-content\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.490936 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-utilities\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.511278 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wt776\" (UniqueName: \"kubernetes.io/projected/32d08a01-dc2f-44f2-ac0e-353ad3559676-kube-api-access-wt776\") pod \"community-operators-49zjq\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:41 crc kubenswrapper[4812]: I1124 21:14:41.621904 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:42 crc kubenswrapper[4812]: I1124 21:14:42.173778 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-49zjq"] Nov 24 21:14:43 crc kubenswrapper[4812]: I1124 21:14:43.023927 4812 generic.go:334] "Generic (PLEG): container finished" podID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerID="b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006" exitCode=0 Nov 24 21:14:43 crc kubenswrapper[4812]: I1124 21:14:43.024249 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49zjq" event={"ID":"32d08a01-dc2f-44f2-ac0e-353ad3559676","Type":"ContainerDied","Data":"b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006"} Nov 24 21:14:43 crc kubenswrapper[4812]: I1124 21:14:43.024277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49zjq" event={"ID":"32d08a01-dc2f-44f2-ac0e-353ad3559676","Type":"ContainerStarted","Data":"550af6fe7e79773536d5f19c9c55e22c1c17cbf8f723650952acce24db2c5f13"} Nov 24 21:14:44 crc kubenswrapper[4812]: I1124 21:14:44.042118 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49zjq" event={"ID":"32d08a01-dc2f-44f2-ac0e-353ad3559676","Type":"ContainerStarted","Data":"3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701"} Nov 24 21:14:46 crc kubenswrapper[4812]: I1124 21:14:46.070128 4812 generic.go:334] "Generic (PLEG): container finished" podID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerID="3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701" exitCode=0 Nov 24 21:14:46 crc kubenswrapper[4812]: I1124 21:14:46.070330 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49zjq" event={"ID":"32d08a01-dc2f-44f2-ac0e-353ad3559676","Type":"ContainerDied","Data":"3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701"} Nov 24 21:14:47 crc kubenswrapper[4812]: I1124 21:14:47.087971 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49zjq" event={"ID":"32d08a01-dc2f-44f2-ac0e-353ad3559676","Type":"ContainerStarted","Data":"c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593"} Nov 24 21:14:47 crc kubenswrapper[4812]: I1124 21:14:47.119481 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-49zjq" podStartSLOduration=2.638696781 podStartE2EDuration="6.119455757s" podCreationTimestamp="2025-11-24 21:14:41 +0000 UTC" firstStartedPulling="2025-11-24 21:14:43.026669439 +0000 UTC m=+7076.815621810" lastFinishedPulling="2025-11-24 21:14:46.507428385 +0000 UTC m=+7080.296380786" observedRunningTime="2025-11-24 21:14:47.108328122 +0000 UTC m=+7080.897280493" watchObservedRunningTime="2025-11-24 21:14:47.119455757 +0000 UTC m=+7080.908408138" Nov 24 21:14:51 crc kubenswrapper[4812]: I1124 21:14:51.622455 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:51 crc kubenswrapper[4812]: I1124 21:14:51.623129 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:51 crc kubenswrapper[4812]: I1124 21:14:51.713693 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:52 crc kubenswrapper[4812]: I1124 21:14:52.240060 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:52 crc kubenswrapper[4812]: I1124 21:14:52.308880 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49zjq"] Nov 24 21:14:54 crc kubenswrapper[4812]: I1124 21:14:54.192789 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-49zjq" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="registry-server" containerID="cri-o://c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593" gracePeriod=2 Nov 24 21:14:54 crc kubenswrapper[4812]: I1124 21:14:54.763558 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:54 crc kubenswrapper[4812]: I1124 21:14:54.938590 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-catalog-content\") pod \"32d08a01-dc2f-44f2-ac0e-353ad3559676\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " Nov 24 21:14:54 crc kubenswrapper[4812]: I1124 21:14:54.938722 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt776\" (UniqueName: \"kubernetes.io/projected/32d08a01-dc2f-44f2-ac0e-353ad3559676-kube-api-access-wt776\") pod \"32d08a01-dc2f-44f2-ac0e-353ad3559676\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " Nov 24 21:14:54 crc kubenswrapper[4812]: I1124 21:14:54.938928 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-utilities\") pod \"32d08a01-dc2f-44f2-ac0e-353ad3559676\" (UID: \"32d08a01-dc2f-44f2-ac0e-353ad3559676\") " Nov 24 21:14:54 crc kubenswrapper[4812]: I1124 21:14:54.940381 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-utilities" (OuterVolumeSpecName: "utilities") pod "32d08a01-dc2f-44f2-ac0e-353ad3559676" (UID: "32d08a01-dc2f-44f2-ac0e-353ad3559676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:14:54 crc kubenswrapper[4812]: I1124 21:14:54.947689 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d08a01-dc2f-44f2-ac0e-353ad3559676-kube-api-access-wt776" (OuterVolumeSpecName: "kube-api-access-wt776") pod "32d08a01-dc2f-44f2-ac0e-353ad3559676" (UID: "32d08a01-dc2f-44f2-ac0e-353ad3559676"). InnerVolumeSpecName "kube-api-access-wt776". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.044235 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt776\" (UniqueName: \"kubernetes.io/projected/32d08a01-dc2f-44f2-ac0e-353ad3559676-kube-api-access-wt776\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.044608 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.223946 4812 generic.go:334] "Generic (PLEG): container finished" podID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerID="c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593" exitCode=0 Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.224006 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49zjq" event={"ID":"32d08a01-dc2f-44f2-ac0e-353ad3559676","Type":"ContainerDied","Data":"c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593"} Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.224048 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-49zjq" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.224076 4812 scope.go:117] "RemoveContainer" containerID="c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.224056 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-49zjq" event={"ID":"32d08a01-dc2f-44f2-ac0e-353ad3559676","Type":"ContainerDied","Data":"550af6fe7e79773536d5f19c9c55e22c1c17cbf8f723650952acce24db2c5f13"} Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.260927 4812 scope.go:117] "RemoveContainer" containerID="3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.307120 4812 scope.go:117] "RemoveContainer" containerID="b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.387610 4812 scope.go:117] "RemoveContainer" containerID="c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593" Nov 24 21:14:55 crc kubenswrapper[4812]: E1124 21:14:55.388107 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593\": container with ID starting with c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593 not found: ID does not exist" containerID="c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.388159 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593"} err="failed to get container status \"c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593\": rpc error: code = NotFound desc = could not find container \"c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593\": container with ID starting with c5e124f240d110d6d216a3b5ad53f1c24b827fec38da68e376de7dfd8a82a593 not found: ID does not exist" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.388192 4812 scope.go:117] 
"RemoveContainer" containerID="3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701" Nov 24 21:14:55 crc kubenswrapper[4812]: E1124 21:14:55.388951 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701\": container with ID starting with 3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701 not found: ID does not exist" containerID="3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.388994 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701"} err="failed to get container status \"3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701\": rpc error: code = NotFound desc = could not find container \"3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701\": container with ID starting with 3c048cdb26f2231f99bbd89a83c85d5e164e8318dc250beadf407817da890701 not found: ID does not exist" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.389021 4812 scope.go:117] "RemoveContainer" containerID="b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006" Nov 24 21:14:55 crc kubenswrapper[4812]: E1124 21:14:55.392489 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006\": container with ID starting with b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006 not found: ID does not exist" containerID="b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.392534 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006"} err="failed to get container status \"b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006\": rpc error: code = NotFound desc = could not find container \"b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006\": container with ID starting with b861454f037df10ea42691bab0b624e11227c3c2fd30333449b179393f00f006 not found: ID does not exist" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.771974 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32d08a01-dc2f-44f2-ac0e-353ad3559676" (UID: "32d08a01-dc2f-44f2-ac0e-353ad3559676"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.875164 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d08a01-dc2f-44f2-ac0e-353ad3559676-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.897200 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-49zjq"] Nov 24 21:14:55 crc kubenswrapper[4812]: I1124 21:14:55.917802 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-49zjq"] Nov 24 21:14:57 crc kubenswrapper[4812]: I1124 21:14:56.999801 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" path="/var/lib/kubelet/pods/32d08a01-dc2f-44f2-ac0e-353ad3559676/volumes" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.197906 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj"] Nov 24 21:15:00 crc kubenswrapper[4812]: E1124 21:15:00.199737 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="registry-server" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.199776 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="registry-server" Nov 24 21:15:00 crc kubenswrapper[4812]: E1124 21:15:00.199866 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="extract-content" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.199886 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="extract-content" Nov 24 21:15:00 crc kubenswrapper[4812]: E1124 21:15:00.199934 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="extract-utilities" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.199951 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="extract-utilities" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.200529 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d08a01-dc2f-44f2-ac0e-353ad3559676" containerName="registry-server" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.202097 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.205232 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.205326 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.215226 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj"] Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.316417 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1cb8a30-be67-4462-8e91-dc626153f19e-config-volume\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.316644 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1cb8a30-be67-4462-8e91-dc626153f19e-secret-volume\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.316714 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vcc4\" (UniqueName: \"kubernetes.io/projected/b1cb8a30-be67-4462-8e91-dc626153f19e-kube-api-access-2vcc4\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.419167 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1cb8a30-be67-4462-8e91-dc626153f19e-config-volume\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.419366 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1cb8a30-be67-4462-8e91-dc626153f19e-secret-volume\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.419410 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vcc4\" (UniqueName: \"kubernetes.io/projected/b1cb8a30-be67-4462-8e91-dc626153f19e-kube-api-access-2vcc4\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.421263 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1cb8a30-be67-4462-8e91-dc626153f19e-config-volume\") pod 
\"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.436188 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1cb8a30-be67-4462-8e91-dc626153f19e-secret-volume\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.442159 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vcc4\" (UniqueName: \"kubernetes.io/projected/b1cb8a30-be67-4462-8e91-dc626153f19e-kube-api-access-2vcc4\") pod \"collect-profiles-29400315-j5khj\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:00 crc kubenswrapper[4812]: I1124 21:15:00.535602 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:01 crc kubenswrapper[4812]: I1124 21:15:01.033125 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj"] Nov 24 21:15:01 crc kubenswrapper[4812]: I1124 21:15:01.326971 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" event={"ID":"b1cb8a30-be67-4462-8e91-dc626153f19e","Type":"ContainerStarted","Data":"cdb0c81f9f88ff471816efdea96c2565b89945f0ce53654ba913c40c487d60ae"} Nov 24 21:15:01 crc kubenswrapper[4812]: I1124 21:15:01.328994 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" event={"ID":"b1cb8a30-be67-4462-8e91-dc626153f19e","Type":"ContainerStarted","Data":"49d0627792196c20c71029751b6f0547090a6aad3c3825f46444dffe66260aee"} Nov 24 21:15:01 crc kubenswrapper[4812]: I1124 21:15:01.355217 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" podStartSLOduration=1.355181305 podStartE2EDuration="1.355181305s" podCreationTimestamp="2025-11-24 21:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:15:01.350117821 +0000 UTC m=+7095.139070192" watchObservedRunningTime="2025-11-24 21:15:01.355181305 +0000 UTC m=+7095.144133676" Nov 24 21:15:02 crc kubenswrapper[4812]: I1124 21:15:02.337142 4812 generic.go:334] "Generic (PLEG): container finished" podID="b1cb8a30-be67-4462-8e91-dc626153f19e" containerID="cdb0c81f9f88ff471816efdea96c2565b89945f0ce53654ba913c40c487d60ae" exitCode=0 Nov 24 21:15:02 crc kubenswrapper[4812]: I1124 21:15:02.337388 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" event={"ID":"b1cb8a30-be67-4462-8e91-dc626153f19e","Type":"ContainerDied","Data":"cdb0c81f9f88ff471816efdea96c2565b89945f0ce53654ba913c40c487d60ae"} Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.248718 4812 scope.go:117] "RemoveContainer" containerID="6d32cbf8b6e86dddf9e0cb350e00299f1430f31ac2f5ec949206f2cda184c19e" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.314200 4812 scope.go:117] 
"RemoveContainer" containerID="0a61db4434949811c58acd594a6e1e36191bda7cbcec247c9234fb322574300e" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.384898 4812 scope.go:117] "RemoveContainer" containerID="bcfbb245d45796ad0291923829c4a5919df24fbbee8aa365059c79e9a3455b21" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.702612 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.808107 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vcc4\" (UniqueName: \"kubernetes.io/projected/b1cb8a30-be67-4462-8e91-dc626153f19e-kube-api-access-2vcc4\") pod \"b1cb8a30-be67-4462-8e91-dc626153f19e\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.808446 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1cb8a30-be67-4462-8e91-dc626153f19e-secret-volume\") pod \"b1cb8a30-be67-4462-8e91-dc626153f19e\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.808482 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1cb8a30-be67-4462-8e91-dc626153f19e-config-volume\") pod \"b1cb8a30-be67-4462-8e91-dc626153f19e\" (UID: \"b1cb8a30-be67-4462-8e91-dc626153f19e\") " Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.809250 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1cb8a30-be67-4462-8e91-dc626153f19e-config-volume" (OuterVolumeSpecName: "config-volume") pod "b1cb8a30-be67-4462-8e91-dc626153f19e" (UID: "b1cb8a30-be67-4462-8e91-dc626153f19e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.815073 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1cb8a30-be67-4462-8e91-dc626153f19e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b1cb8a30-be67-4462-8e91-dc626153f19e" (UID: "b1cb8a30-be67-4462-8e91-dc626153f19e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.815269 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1cb8a30-be67-4462-8e91-dc626153f19e-kube-api-access-2vcc4" (OuterVolumeSpecName: "kube-api-access-2vcc4") pod "b1cb8a30-be67-4462-8e91-dc626153f19e" (UID: "b1cb8a30-be67-4462-8e91-dc626153f19e"). InnerVolumeSpecName "kube-api-access-2vcc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.911072 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b1cb8a30-be67-4462-8e91-dc626153f19e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.911439 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1cb8a30-be67-4462-8e91-dc626153f19e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:15:03 crc kubenswrapper[4812]: I1124 21:15:03.911577 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vcc4\" (UniqueName: \"kubernetes.io/projected/b1cb8a30-be67-4462-8e91-dc626153f19e-kube-api-access-2vcc4\") on node \"crc\" DevicePath \"\"" Nov 24 21:15:04 crc kubenswrapper[4812]: I1124 21:15:04.366013 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" event={"ID":"b1cb8a30-be67-4462-8e91-dc626153f19e","Type":"ContainerDied","Data":"49d0627792196c20c71029751b6f0547090a6aad3c3825f46444dffe66260aee"} Nov 24 21:15:04 crc kubenswrapper[4812]: I1124 21:15:04.366053 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49d0627792196c20c71029751b6f0547090a6aad3c3825f46444dffe66260aee" Nov 24 21:15:04 crc kubenswrapper[4812]: I1124 21:15:04.366052 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj" Nov 24 21:15:04 crc kubenswrapper[4812]: I1124 21:15:04.421969 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm"] Nov 24 21:15:04 crc kubenswrapper[4812]: I1124 21:15:04.431203 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400270-qz6hm"] Nov 24 21:15:04 crc kubenswrapper[4812]: I1124 21:15:04.985644 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76a68241-269a-450a-9b60-90ff42711616" path="/var/lib/kubelet/pods/76a68241-269a-450a-9b60-90ff42711616/volumes" Nov 24 21:16:03 crc kubenswrapper[4812]: I1124 21:16:03.610973 4812 scope.go:117] "RemoveContainer" containerID="5133d57387cea94020b839a33999afaf60ebc70a772f179235f80b89559d78c1" Nov 24 21:16:32 crc kubenswrapper[4812]: I1124 21:16:32.998421 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:16:33 crc kubenswrapper[4812]: I1124 21:16:32.999039 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:17:02 crc kubenswrapper[4812]: I1124 21:17:02.998924 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 24 21:17:03 crc kubenswrapper[4812]: I1124 21:17:02.999670 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:17:32 crc kubenswrapper[4812]: I1124 21:17:32.998983 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:17:33 crc kubenswrapper[4812]: I1124 21:17:32.999768 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:17:33 crc kubenswrapper[4812]: I1124 21:17:32.999837 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:17:33 crc kubenswrapper[4812]: I1124 21:17:33.000768 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42da69ace901f1d1c1447c55d84bba6cad25de3b45c999d9dc8c61d5cd823f6d"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:17:33 crc kubenswrapper[4812]: I1124 21:17:33.000871 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://42da69ace901f1d1c1447c55d84bba6cad25de3b45c999d9dc8c61d5cd823f6d" gracePeriod=600 Nov 24 21:17:33 crc kubenswrapper[4812]: I1124 21:17:33.178554 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="42da69ace901f1d1c1447c55d84bba6cad25de3b45c999d9dc8c61d5cd823f6d" exitCode=0 Nov 24 21:17:33 crc kubenswrapper[4812]: I1124 21:17:33.178612 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"42da69ace901f1d1c1447c55d84bba6cad25de3b45c999d9dc8c61d5cd823f6d"} Nov 24 21:17:33 crc kubenswrapper[4812]: I1124 21:17:33.178656 4812 scope.go:117] "RemoveContainer" containerID="d43396aac969e811509f120cf73f6f169202b14fdb63a160e223678ff4ad21e7" Nov 24 21:17:34 crc kubenswrapper[4812]: I1124 21:17:34.191089 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"} Nov 24 21:18:43 crc kubenswrapper[4812]: I1124 21:18:43.774928 4812 generic.go:334] "Generic (PLEG): container finished" podID="92040984-aa11-4d6a-9069-58324eac2e33" containerID="5138ac4851369d307f2b3d320ca3c93f12f0a9f732de4439c7e2e3500baf022b" exitCode=0 
Nov 24 21:18:43 crc kubenswrapper[4812]: I1124 21:18:43.775018 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" event={"ID":"92040984-aa11-4d6a-9069-58324eac2e33","Type":"ContainerDied","Data":"5138ac4851369d307f2b3d320ca3c93f12f0a9f732de4439c7e2e3500baf022b"} Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.250323 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.352818 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-inventory\") pod \"92040984-aa11-4d6a-9069-58324eac2e33\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.352951 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-tripleo-cleanup-combined-ca-bundle\") pod \"92040984-aa11-4d6a-9069-58324eac2e33\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.353034 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-ssh-key\") pod \"92040984-aa11-4d6a-9069-58324eac2e33\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.353071 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv2q4\" (UniqueName: \"kubernetes.io/projected/92040984-aa11-4d6a-9069-58324eac2e33-kube-api-access-cv2q4\") pod \"92040984-aa11-4d6a-9069-58324eac2e33\" (UID: \"92040984-aa11-4d6a-9069-58324eac2e33\") " Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.358857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92040984-aa11-4d6a-9069-58324eac2e33-kube-api-access-cv2q4" (OuterVolumeSpecName: "kube-api-access-cv2q4") pod "92040984-aa11-4d6a-9069-58324eac2e33" (UID: "92040984-aa11-4d6a-9069-58324eac2e33"). InnerVolumeSpecName "kube-api-access-cv2q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.366543 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "92040984-aa11-4d6a-9069-58324eac2e33" (UID: "92040984-aa11-4d6a-9069-58324eac2e33"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.412622 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-inventory" (OuterVolumeSpecName: "inventory") pod "92040984-aa11-4d6a-9069-58324eac2e33" (UID: "92040984-aa11-4d6a-9069-58324eac2e33"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.413187 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "92040984-aa11-4d6a-9069-58324eac2e33" (UID: "92040984-aa11-4d6a-9069-58324eac2e33"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.455622 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.455671 4812 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.455686 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/92040984-aa11-4d6a-9069-58324eac2e33-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.455698 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv2q4\" (UniqueName: \"kubernetes.io/projected/92040984-aa11-4d6a-9069-58324eac2e33-kube-api-access-cv2q4\") on node \"crc\" DevicePath \"\"" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.801780 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" event={"ID":"92040984-aa11-4d6a-9069-58324eac2e33","Type":"ContainerDied","Data":"c993c92d1171f03ba7e8a7a41fc222cedd2d3e70518af65d0382d2c654290067"} Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.801840 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c993c92d1171f03ba7e8a7a41fc222cedd2d3e70518af65d0382d2c654290067" Nov 24 21:18:45 crc kubenswrapper[4812]: I1124 21:18:45.801949 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.643968 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-mlwwr"] Nov 24 21:18:54 crc kubenswrapper[4812]: E1124 21:18:54.648629 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1cb8a30-be67-4462-8e91-dc626153f19e" containerName="collect-profiles" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.648654 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1cb8a30-be67-4462-8e91-dc626153f19e" containerName="collect-profiles" Nov 24 21:18:54 crc kubenswrapper[4812]: E1124 21:18:54.648691 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92040984-aa11-4d6a-9069-58324eac2e33" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.648699 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="92040984-aa11-4d6a-9069-58324eac2e33" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.648995 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="92040984-aa11-4d6a-9069-58324eac2e33" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.649019 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1cb8a30-be67-4462-8e91-dc626153f19e" containerName="collect-profiles" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.649758 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.654388 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.654571 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.654593 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.654767 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.664532 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-mlwwr"] Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.719205 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.719409 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: 
I1124 21:18:54.719460 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nht29\" (UniqueName: \"kubernetes.io/projected/d632362c-f1f0-4d67-a5aa-4521e500ae30-kube-api-access-nht29\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.719690 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-inventory\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.821444 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.821546 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nht29\" (UniqueName: \"kubernetes.io/projected/d632362c-f1f0-4d67-a5aa-4521e500ae30-kube-api-access-nht29\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.821666 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-inventory\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.821927 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.829608 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.830235 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-inventory\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.838723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-ssh-key\") 
pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.848995 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nht29\" (UniqueName: \"kubernetes.io/projected/d632362c-f1f0-4d67-a5aa-4521e500ae30-kube-api-access-nht29\") pod \"bootstrap-openstack-openstack-cell1-mlwwr\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:54 crc kubenswrapper[4812]: I1124 21:18:54.984797 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:18:55 crc kubenswrapper[4812]: I1124 21:18:55.578898 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-mlwwr"] Nov 24 21:18:55 crc kubenswrapper[4812]: I1124 21:18:55.580791 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:18:55 crc kubenswrapper[4812]: I1124 21:18:55.915500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" event={"ID":"d632362c-f1f0-4d67-a5aa-4521e500ae30","Type":"ContainerStarted","Data":"ae5b5616164b2f0c5e619f5e3a7701c4c25035df25a05e100782e2207495fb95"} Nov 24 21:18:56 crc kubenswrapper[4812]: I1124 21:18:56.937806 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" event={"ID":"d632362c-f1f0-4d67-a5aa-4521e500ae30","Type":"ContainerStarted","Data":"d732db0b2dca5e6bc2b4c3b11934a3b2eb81718bf05265f483cfb3437b2ce1af"} Nov 24 21:18:56 crc kubenswrapper[4812]: I1124 21:18:56.978216 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" podStartSLOduration=2.526988993 podStartE2EDuration="2.978191821s" podCreationTimestamp="2025-11-24 21:18:54 +0000 UTC" firstStartedPulling="2025-11-24 21:18:55.58059172 +0000 UTC m=+7329.369544091" lastFinishedPulling="2025-11-24 21:18:56.031794548 +0000 UTC m=+7329.820746919" observedRunningTime="2025-11-24 21:18:56.97462349 +0000 UTC m=+7330.763575891" watchObservedRunningTime="2025-11-24 21:18:56.978191821 +0000 UTC m=+7330.767144202" Nov 24 21:20:02 crc kubenswrapper[4812]: I1124 21:20:02.998593 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:20:02 crc kubenswrapper[4812]: I1124 21:20:02.999124 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.424031 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lmx4r"] Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.430555 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.435539 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmx4r"] Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.596171 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-utilities\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.596316 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wvsq\" (UniqueName: \"kubernetes.io/projected/54bd4f01-93b3-4414-bd38-f62a7e530654-kube-api-access-5wvsq\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.596391 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-catalog-content\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.698747 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-utilities\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.699206 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wvsq\" (UniqueName: \"kubernetes.io/projected/54bd4f01-93b3-4414-bd38-f62a7e530654-kube-api-access-5wvsq\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.699376 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-catalog-content\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.699442 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-utilities\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.699898 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-catalog-content\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.729683 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5wvsq\" (UniqueName: \"kubernetes.io/projected/54bd4f01-93b3-4414-bd38-f62a7e530654-kube-api-access-5wvsq\") pod \"certified-operators-lmx4r\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:10 crc kubenswrapper[4812]: I1124 21:20:10.755937 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:11 crc kubenswrapper[4812]: I1124 21:20:11.289014 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmx4r"] Nov 24 21:20:11 crc kubenswrapper[4812]: I1124 21:20:11.748520 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerStarted","Data":"0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d"} Nov 24 21:20:11 crc kubenswrapper[4812]: I1124 21:20:11.749061 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerStarted","Data":"52341324cdd01eb8969ca39b99c27f6e0af6fd52392c2461f73f962baf242dbd"} Nov 24 21:20:12 crc kubenswrapper[4812]: I1124 21:20:12.760048 4812 generic.go:334] "Generic (PLEG): container finished" podID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerID="0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d" exitCode=0 Nov 24 21:20:12 crc kubenswrapper[4812]: I1124 21:20:12.760181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerDied","Data":"0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d"} Nov 24 21:20:13 crc kubenswrapper[4812]: I1124 21:20:13.777747 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerStarted","Data":"0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31"} Nov 24 21:20:16 crc kubenswrapper[4812]: I1124 21:20:16.818660 4812 generic.go:334] "Generic (PLEG): container finished" podID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerID="0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31" exitCode=0 Nov 24 21:20:16 crc kubenswrapper[4812]: I1124 21:20:16.819050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerDied","Data":"0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31"} Nov 24 21:20:17 crc kubenswrapper[4812]: I1124 21:20:17.840964 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerStarted","Data":"95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0"} Nov 24 21:20:17 crc kubenswrapper[4812]: I1124 21:20:17.872580 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lmx4r" podStartSLOduration=2.271174556 podStartE2EDuration="7.872559169s" podCreationTimestamp="2025-11-24 21:20:10 +0000 UTC" firstStartedPulling="2025-11-24 21:20:11.751635792 +0000 UTC m=+7405.540588173" lastFinishedPulling="2025-11-24 
21:20:17.353020405 +0000 UTC m=+7411.141972786" observedRunningTime="2025-11-24 21:20:17.870701977 +0000 UTC m=+7411.659654348" watchObservedRunningTime="2025-11-24 21:20:17.872559169 +0000 UTC m=+7411.661511550" Nov 24 21:20:20 crc kubenswrapper[4812]: I1124 21:20:20.756654 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:20 crc kubenswrapper[4812]: I1124 21:20:20.757533 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:20 crc kubenswrapper[4812]: I1124 21:20:20.845963 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:30 crc kubenswrapper[4812]: I1124 21:20:30.841794 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:30 crc kubenswrapper[4812]: I1124 21:20:30.911694 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmx4r"] Nov 24 21:20:30 crc kubenswrapper[4812]: I1124 21:20:30.990093 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lmx4r" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="registry-server" containerID="cri-o://95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0" gracePeriod=2 Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.579465 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.662934 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-utilities\") pod \"54bd4f01-93b3-4414-bd38-f62a7e530654\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.663027 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-catalog-content\") pod \"54bd4f01-93b3-4414-bd38-f62a7e530654\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.663109 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wvsq\" (UniqueName: \"kubernetes.io/projected/54bd4f01-93b3-4414-bd38-f62a7e530654-kube-api-access-5wvsq\") pod \"54bd4f01-93b3-4414-bd38-f62a7e530654\" (UID: \"54bd4f01-93b3-4414-bd38-f62a7e530654\") " Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.664270 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-utilities" (OuterVolumeSpecName: "utilities") pod "54bd4f01-93b3-4414-bd38-f62a7e530654" (UID: "54bd4f01-93b3-4414-bd38-f62a7e530654"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.675767 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54bd4f01-93b3-4414-bd38-f62a7e530654-kube-api-access-5wvsq" (OuterVolumeSpecName: "kube-api-access-5wvsq") pod "54bd4f01-93b3-4414-bd38-f62a7e530654" (UID: "54bd4f01-93b3-4414-bd38-f62a7e530654"). InnerVolumeSpecName "kube-api-access-5wvsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.736078 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "54bd4f01-93b3-4414-bd38-f62a7e530654" (UID: "54bd4f01-93b3-4414-bd38-f62a7e530654"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.766584 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.766853 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54bd4f01-93b3-4414-bd38-f62a7e530654-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:31 crc kubenswrapper[4812]: I1124 21:20:31.766934 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wvsq\" (UniqueName: \"kubernetes.io/projected/54bd4f01-93b3-4414-bd38-f62a7e530654-kube-api-access-5wvsq\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.004261 4812 generic.go:334] "Generic (PLEG): container finished" podID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerID="95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0" exitCode=0 Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.004312 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lmx4r" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.004315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerDied","Data":"95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0"} Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.004462 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmx4r" event={"ID":"54bd4f01-93b3-4414-bd38-f62a7e530654","Type":"ContainerDied","Data":"52341324cdd01eb8969ca39b99c27f6e0af6fd52392c2461f73f962baf242dbd"} Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.004498 4812 scope.go:117] "RemoveContainer" containerID="95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.043741 4812 scope.go:117] "RemoveContainer" containerID="0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.048580 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lmx4r"] Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.058614 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lmx4r"] Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.098679 4812 scope.go:117] "RemoveContainer" containerID="0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.147239 4812 scope.go:117] "RemoveContainer" containerID="95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0" Nov 24 21:20:32 crc kubenswrapper[4812]: E1124 21:20:32.151225 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0\": container with ID starting with 95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0 not found: ID does not exist" containerID="95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.151270 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0"} err="failed to get container status \"95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0\": rpc error: code = NotFound desc = could not find container \"95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0\": container with ID starting with 95da2df052089d71ec759d8a7da1a895956e4f1b81bbbf3496bb58df42f34ad0 not found: ID does not exist" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.151301 4812 scope.go:117] "RemoveContainer" containerID="0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31" Nov 24 21:20:32 crc kubenswrapper[4812]: E1124 21:20:32.152370 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31\": container with ID starting with 0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31 not found: ID does not exist" containerID="0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.152424 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31"} err="failed to get container status \"0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31\": rpc error: code = NotFound desc = could not find container \"0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31\": container with ID starting with 0221b3d726eba88a620f6b1d99c3630d2f3ba299a0f4424826cbb3900df5aa31 not found: ID does not exist" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.152456 4812 scope.go:117] "RemoveContainer" containerID="0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d" Nov 24 21:20:32 crc kubenswrapper[4812]: E1124 21:20:32.152802 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d\": container with ID starting with 0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d not found: ID does not exist" containerID="0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.152822 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d"} err="failed to get container status \"0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d\": rpc error: code = NotFound desc = could not find container \"0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d\": container with ID starting with 0c5f5d1024cdcb615a380a37cef5428483bca9f2e954a412148815f79f36521d not found: ID does not exist" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.978459 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" path="/var/lib/kubelet/pods/54bd4f01-93b3-4414-bd38-f62a7e530654/volumes" Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.998001 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:20:32 crc kubenswrapper[4812]: I1124 21:20:32.998058 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:20:50 crc kubenswrapper[4812]: I1124 21:20:50.633508 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-f55484fc8-26grz" podUID="ca44fa40-7e18-4f18-b5a9-8714994880b8" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 21:21:02 crc kubenswrapper[4812]: I1124 21:21:02.998946 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:21:02 crc kubenswrapper[4812]: I1124 21:21:02.999686 4812 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:21:03 crc kubenswrapper[4812]: I1124 21:21:02.999753 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:21:03 crc kubenswrapper[4812]: I1124 21:21:03.001132 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:21:03 crc kubenswrapper[4812]: I1124 21:21:03.001249 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" gracePeriod=600 Nov 24 21:21:03 crc kubenswrapper[4812]: E1124 21:21:03.167253 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:21:03 crc kubenswrapper[4812]: I1124 21:21:03.418280 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" exitCode=0 Nov 24 21:21:03 crc kubenswrapper[4812]: I1124 21:21:03.418426 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"} Nov 24 21:21:03 crc kubenswrapper[4812]: I1124 21:21:03.422434 4812 scope.go:117] "RemoveContainer" containerID="42da69ace901f1d1c1447c55d84bba6cad25de3b45c999d9dc8c61d5cd823f6d" Nov 24 21:21:03 crc kubenswrapper[4812]: I1124 21:21:03.424166 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:21:03 crc kubenswrapper[4812]: E1124 21:21:03.424719 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:21:14 crc kubenswrapper[4812]: I1124 21:21:14.966612 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:21:14 crc kubenswrapper[4812]: E1124 21:21:14.967605 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:21:27 crc kubenswrapper[4812]: I1124 21:21:27.965694 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:21:27 crc kubenswrapper[4812]: E1124 21:21:27.966671 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:21:40 crc kubenswrapper[4812]: I1124 21:21:40.966497 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:21:40 crc kubenswrapper[4812]: E1124 21:21:40.967190 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:21:52 crc kubenswrapper[4812]: I1124 21:21:52.966379 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:21:52 crc kubenswrapper[4812]: E1124 21:21:52.967123 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:21:59 crc kubenswrapper[4812]: I1124 21:21:59.045655 4812 generic.go:334] "Generic (PLEG): container finished" podID="d632362c-f1f0-4d67-a5aa-4521e500ae30" containerID="d732db0b2dca5e6bc2b4c3b11934a3b2eb81718bf05265f483cfb3437b2ce1af" exitCode=0 Nov 24 21:21:59 crc kubenswrapper[4812]: I1124 21:21:59.046133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" event={"ID":"d632362c-f1f0-4d67-a5aa-4521e500ae30","Type":"ContainerDied","Data":"d732db0b2dca5e6bc2b4c3b11934a3b2eb81718bf05265f483cfb3437b2ce1af"} Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.600461 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.690912 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-inventory\") pod \"d632362c-f1f0-4d67-a5aa-4521e500ae30\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.691191 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nht29\" (UniqueName: \"kubernetes.io/projected/d632362c-f1f0-4d67-a5aa-4521e500ae30-kube-api-access-nht29\") pod \"d632362c-f1f0-4d67-a5aa-4521e500ae30\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.691449 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-ssh-key\") pod \"d632362c-f1f0-4d67-a5aa-4521e500ae30\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.691534 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-bootstrap-combined-ca-bundle\") pod \"d632362c-f1f0-4d67-a5aa-4521e500ae30\" (UID: \"d632362c-f1f0-4d67-a5aa-4521e500ae30\") " Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.706621 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d632362c-f1f0-4d67-a5aa-4521e500ae30-kube-api-access-nht29" (OuterVolumeSpecName: "kube-api-access-nht29") pod "d632362c-f1f0-4d67-a5aa-4521e500ae30" (UID: "d632362c-f1f0-4d67-a5aa-4521e500ae30"). InnerVolumeSpecName "kube-api-access-nht29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.710758 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d632362c-f1f0-4d67-a5aa-4521e500ae30" (UID: "d632362c-f1f0-4d67-a5aa-4521e500ae30"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.726730 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-inventory" (OuterVolumeSpecName: "inventory") pod "d632362c-f1f0-4d67-a5aa-4521e500ae30" (UID: "d632362c-f1f0-4d67-a5aa-4521e500ae30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.728881 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d632362c-f1f0-4d67-a5aa-4521e500ae30" (UID: "d632362c-f1f0-4d67-a5aa-4521e500ae30"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.794169 4812 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.794201 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.794212 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nht29\" (UniqueName: \"kubernetes.io/projected/d632362c-f1f0-4d67-a5aa-4521e500ae30-kube-api-access-nht29\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:00 crc kubenswrapper[4812]: I1124 21:22:00.794221 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d632362c-f1f0-4d67-a5aa-4521e500ae30-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.071903 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" event={"ID":"d632362c-f1f0-4d67-a5aa-4521e500ae30","Type":"ContainerDied","Data":"ae5b5616164b2f0c5e619f5e3a7701c4c25035df25a05e100782e2207495fb95"} Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.071954 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae5b5616164b2f0c5e619f5e3a7701c4c25035df25a05e100782e2207495fb95" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.071966 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-mlwwr" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.151757 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-kntww"] Nov 24 21:22:01 crc kubenswrapper[4812]: E1124 21:22:01.152185 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="extract-utilities" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.152202 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="extract-utilities" Nov 24 21:22:01 crc kubenswrapper[4812]: E1124 21:22:01.152214 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d632362c-f1f0-4d67-a5aa-4521e500ae30" containerName="bootstrap-openstack-openstack-cell1" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.152221 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d632362c-f1f0-4d67-a5aa-4521e500ae30" containerName="bootstrap-openstack-openstack-cell1" Nov 24 21:22:01 crc kubenswrapper[4812]: E1124 21:22:01.152243 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="registry-server" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.152249 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="registry-server" Nov 24 21:22:01 crc kubenswrapper[4812]: E1124 21:22:01.152261 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="extract-content" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.152267 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="extract-content" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.152478 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d632362c-f1f0-4d67-a5aa-4521e500ae30" containerName="bootstrap-openstack-openstack-cell1" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.152498 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="54bd4f01-93b3-4414-bd38-f62a7e530654" containerName="registry-server" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.153272 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.162193 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.162264 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.162691 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.162199 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.165384 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-kntww"] Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.204466 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf6fj\" (UniqueName: \"kubernetes.io/projected/6cf727d3-1ff3-4538-b397-4b5075237e17-kube-api-access-gf6fj\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.204591 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-ssh-key\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.204704 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-inventory\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.305812 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf6fj\" (UniqueName: \"kubernetes.io/projected/6cf727d3-1ff3-4538-b397-4b5075237e17-kube-api-access-gf6fj\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.305904 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-ssh-key\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.305948 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-inventory\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 
21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.309929 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-inventory\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.310766 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-ssh-key\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.322912 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf6fj\" (UniqueName: \"kubernetes.io/projected/6cf727d3-1ff3-4538-b397-4b5075237e17-kube-api-access-gf6fj\") pod \"download-cache-openstack-openstack-cell1-kntww\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:01 crc kubenswrapper[4812]: I1124 21:22:01.473098 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:22:02 crc kubenswrapper[4812]: I1124 21:22:02.164597 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-kntww"] Nov 24 21:22:03 crc kubenswrapper[4812]: I1124 21:22:03.098214 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-kntww" event={"ID":"6cf727d3-1ff3-4538-b397-4b5075237e17","Type":"ContainerStarted","Data":"878e494a65e808917aa5c2ea5a539ad05bfa7395377b7300d5e918acf3679b9d"} Nov 24 21:22:03 crc kubenswrapper[4812]: I1124 21:22:03.098828 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-kntww" event={"ID":"6cf727d3-1ff3-4538-b397-4b5075237e17","Type":"ContainerStarted","Data":"ea033f37d4c65069d1f1c0524c8d21911f6d5fd1d703b41383f22bf9455984a8"} Nov 24 21:22:03 crc kubenswrapper[4812]: I1124 21:22:03.114991 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-kntww" podStartSLOduration=1.586634792 podStartE2EDuration="2.114971966s" podCreationTimestamp="2025-11-24 21:22:01 +0000 UTC" firstStartedPulling="2025-11-24 21:22:02.201017182 +0000 UTC m=+7515.989969563" lastFinishedPulling="2025-11-24 21:22:02.729354366 +0000 UTC m=+7516.518306737" observedRunningTime="2025-11-24 21:22:03.113140424 +0000 UTC m=+7516.902092795" watchObservedRunningTime="2025-11-24 21:22:03.114971966 +0000 UTC m=+7516.903924337" Nov 24 21:22:04 crc kubenswrapper[4812]: I1124 21:22:04.966639 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:22:04 crc kubenswrapper[4812]: E1124 21:22:04.967200 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" 
podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:22:15 crc kubenswrapper[4812]: I1124 21:22:15.968533 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:22:15 crc kubenswrapper[4812]: E1124 21:22:15.972110 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:22:26 crc kubenswrapper[4812]: I1124 21:22:26.981535 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:22:26 crc kubenswrapper[4812]: E1124 21:22:26.985600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:22:41 crc kubenswrapper[4812]: I1124 21:22:41.966318 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:22:41 crc kubenswrapper[4812]: E1124 21:22:41.970866 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:22:53 crc kubenswrapper[4812]: I1124 21:22:53.966682 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:22:53 crc kubenswrapper[4812]: E1124 21:22:53.967984 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:23:07 crc kubenswrapper[4812]: I1124 21:23:07.967045 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:23:07 crc kubenswrapper[4812]: E1124 21:23:07.968031 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:23:21 crc kubenswrapper[4812]: I1124 21:23:21.967032 4812 scope.go:117] "RemoveContainer" 
containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:23:21 crc kubenswrapper[4812]: E1124 21:23:21.969244 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:23:32 crc kubenswrapper[4812]: I1124 21:23:32.192664 4812 generic.go:334] "Generic (PLEG): container finished" podID="6cf727d3-1ff3-4538-b397-4b5075237e17" containerID="878e494a65e808917aa5c2ea5a539ad05bfa7395377b7300d5e918acf3679b9d" exitCode=0 Nov 24 21:23:32 crc kubenswrapper[4812]: I1124 21:23:32.192814 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-kntww" event={"ID":"6cf727d3-1ff3-4538-b397-4b5075237e17","Type":"ContainerDied","Data":"878e494a65e808917aa5c2ea5a539ad05bfa7395377b7300d5e918acf3679b9d"} Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.653806 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.832767 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-inventory\") pod \"6cf727d3-1ff3-4538-b397-4b5075237e17\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.832930 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf6fj\" (UniqueName: \"kubernetes.io/projected/6cf727d3-1ff3-4538-b397-4b5075237e17-kube-api-access-gf6fj\") pod \"6cf727d3-1ff3-4538-b397-4b5075237e17\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.832984 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-ssh-key\") pod \"6cf727d3-1ff3-4538-b397-4b5075237e17\" (UID: \"6cf727d3-1ff3-4538-b397-4b5075237e17\") " Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.839689 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cf727d3-1ff3-4538-b397-4b5075237e17-kube-api-access-gf6fj" (OuterVolumeSpecName: "kube-api-access-gf6fj") pod "6cf727d3-1ff3-4538-b397-4b5075237e17" (UID: "6cf727d3-1ff3-4538-b397-4b5075237e17"). InnerVolumeSpecName "kube-api-access-gf6fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.874884 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-inventory" (OuterVolumeSpecName: "inventory") pod "6cf727d3-1ff3-4538-b397-4b5075237e17" (UID: "6cf727d3-1ff3-4538-b397-4b5075237e17"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.885088 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6cf727d3-1ff3-4538-b397-4b5075237e17" (UID: "6cf727d3-1ff3-4538-b397-4b5075237e17"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.935529 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.935583 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf6fj\" (UniqueName: \"kubernetes.io/projected/6cf727d3-1ff3-4538-b397-4b5075237e17-kube-api-access-gf6fj\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:33 crc kubenswrapper[4812]: I1124 21:23:33.935603 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6cf727d3-1ff3-4538-b397-4b5075237e17-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.218542 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-kntww" event={"ID":"6cf727d3-1ff3-4538-b397-4b5075237e17","Type":"ContainerDied","Data":"ea033f37d4c65069d1f1c0524c8d21911f6d5fd1d703b41383f22bf9455984a8"} Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.218847 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea033f37d4c65069d1f1c0524c8d21911f6d5fd1d703b41383f22bf9455984a8" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.218644 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-kntww" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.330965 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-4bh9q"] Nov 24 21:23:34 crc kubenswrapper[4812]: E1124 21:23:34.331623 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cf727d3-1ff3-4538-b397-4b5075237e17" containerName="download-cache-openstack-openstack-cell1" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.331645 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cf727d3-1ff3-4538-b397-4b5075237e17" containerName="download-cache-openstack-openstack-cell1" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.331860 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cf727d3-1ff3-4538-b397-4b5075237e17" containerName="download-cache-openstack-openstack-cell1" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.332628 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.335316 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.335630 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.335713 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.340797 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.366276 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-4bh9q"] Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.453447 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-ssh-key\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.453584 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-inventory\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.453717 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djrlr\" (UniqueName: \"kubernetes.io/projected/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-kube-api-access-djrlr\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.555874 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-ssh-key\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.555985 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-inventory\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.556047 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djrlr\" (UniqueName: \"kubernetes.io/projected/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-kube-api-access-djrlr\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " 
pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.560317 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-ssh-key\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.560440 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-inventory\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.589473 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djrlr\" (UniqueName: \"kubernetes.io/projected/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-kube-api-access-djrlr\") pod \"configure-network-openstack-openstack-cell1-4bh9q\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:34 crc kubenswrapper[4812]: I1124 21:23:34.653712 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:23:35 crc kubenswrapper[4812]: I1124 21:23:35.216797 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-4bh9q"] Nov 24 21:23:36 crc kubenswrapper[4812]: I1124 21:23:36.264448 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" event={"ID":"37665f2f-cfb5-42f7-b07b-5accbe05e5ec","Type":"ContainerStarted","Data":"211c0c94a78a98bf0f419efa09f54ffbbfbcb8b0ced18264a8c485dbcc471b49"} Nov 24 21:23:36 crc kubenswrapper[4812]: I1124 21:23:36.264725 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" event={"ID":"37665f2f-cfb5-42f7-b07b-5accbe05e5ec","Type":"ContainerStarted","Data":"f4e96792cc083bf5cc92728c754c8e31977ffe68647149683cd720aa38ed8985"} Nov 24 21:23:36 crc kubenswrapper[4812]: I1124 21:23:36.981314 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:23:36 crc kubenswrapper[4812]: E1124 21:23:36.981838 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:23:47 crc kubenswrapper[4812]: I1124 21:23:47.966466 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:23:47 crc kubenswrapper[4812]: E1124 21:23:47.967376 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:23:58 crc kubenswrapper[4812]: I1124 21:23:58.965835 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:23:58 crc kubenswrapper[4812]: E1124 21:23:58.966600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:24:10 crc kubenswrapper[4812]: I1124 21:24:10.966652 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:24:10 crc kubenswrapper[4812]: E1124 21:24:10.967414 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:24:24 crc kubenswrapper[4812]: I1124 21:24:24.965688 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:24:24 crc kubenswrapper[4812]: E1124 21:24:24.966788 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.512369 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" podStartSLOduration=53.063889652 podStartE2EDuration="53.512303081s" podCreationTimestamp="2025-11-24 21:23:34 +0000 UTC" firstStartedPulling="2025-11-24 21:23:35.224099124 +0000 UTC m=+7609.013051495" lastFinishedPulling="2025-11-24 21:23:35.672512513 +0000 UTC m=+7609.461464924" observedRunningTime="2025-11-24 21:23:36.295096888 +0000 UTC m=+7610.084049259" watchObservedRunningTime="2025-11-24 21:24:27.512303081 +0000 UTC m=+7661.301255492" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.524648 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rq45m"] Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.527520 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.544309 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rq45m"] Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.687727 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-catalog-content\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.687796 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-utilities\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.687906 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9f2c\" (UniqueName: \"kubernetes.io/projected/9f76824b-2026-4c59-98bf-31d6b2e2a245-kube-api-access-d9f2c\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.790443 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-catalog-content\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.790765 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-utilities\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.790825 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9f2c\" (UniqueName: \"kubernetes.io/projected/9f76824b-2026-4c59-98bf-31d6b2e2a245-kube-api-access-d9f2c\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.791379 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-utilities\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.791578 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-catalog-content\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.813729 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-d9f2c\" (UniqueName: \"kubernetes.io/projected/9f76824b-2026-4c59-98bf-31d6b2e2a245-kube-api-access-d9f2c\") pod \"redhat-operators-rq45m\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:27 crc kubenswrapper[4812]: I1124 21:24:27.853588 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:28 crc kubenswrapper[4812]: I1124 21:24:28.360759 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rq45m"] Nov 24 21:24:28 crc kubenswrapper[4812]: W1124 21:24:28.367626 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f76824b_2026_4c59_98bf_31d6b2e2a245.slice/crio-d8bf1208c59743881d65f563556d220bfb6489ea4e1270aaad5b18ec698c289e WatchSource:0}: Error finding container d8bf1208c59743881d65f563556d220bfb6489ea4e1270aaad5b18ec698c289e: Status 404 returned error can't find the container with id d8bf1208c59743881d65f563556d220bfb6489ea4e1270aaad5b18ec698c289e Nov 24 21:24:28 crc kubenswrapper[4812]: I1124 21:24:28.848137 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rq45m" event={"ID":"9f76824b-2026-4c59-98bf-31d6b2e2a245","Type":"ContainerStarted","Data":"d8bf1208c59743881d65f563556d220bfb6489ea4e1270aaad5b18ec698c289e"} Nov 24 21:24:29 crc kubenswrapper[4812]: I1124 21:24:29.864367 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerID="d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287" exitCode=0 Nov 24 21:24:29 crc kubenswrapper[4812]: I1124 21:24:29.864476 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rq45m" event={"ID":"9f76824b-2026-4c59-98bf-31d6b2e2a245","Type":"ContainerDied","Data":"d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287"} Nov 24 21:24:29 crc kubenswrapper[4812]: I1124 21:24:29.870622 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:24:31 crc kubenswrapper[4812]: I1124 21:24:31.889039 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rq45m" event={"ID":"9f76824b-2026-4c59-98bf-31d6b2e2a245","Type":"ContainerStarted","Data":"8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582"} Nov 24 21:24:33 crc kubenswrapper[4812]: I1124 21:24:33.911430 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerID="8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582" exitCode=0 Nov 24 21:24:33 crc kubenswrapper[4812]: I1124 21:24:33.911482 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rq45m" event={"ID":"9f76824b-2026-4c59-98bf-31d6b2e2a245","Type":"ContainerDied","Data":"8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582"} Nov 24 21:24:35 crc kubenswrapper[4812]: I1124 21:24:35.965882 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:24:35 crc kubenswrapper[4812]: E1124 21:24:35.966901 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:24:37 crc kubenswrapper[4812]: I1124 21:24:37.968872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rq45m" event={"ID":"9f76824b-2026-4c59-98bf-31d6b2e2a245","Type":"ContainerStarted","Data":"c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4"} Nov 24 21:24:37 crc kubenswrapper[4812]: I1124 21:24:37.994490 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rq45m" podStartSLOduration=3.4940469690000002 podStartE2EDuration="10.994465031s" podCreationTimestamp="2025-11-24 21:24:27 +0000 UTC" firstStartedPulling="2025-11-24 21:24:29.870366683 +0000 UTC m=+7663.659319064" lastFinishedPulling="2025-11-24 21:24:37.370784715 +0000 UTC m=+7671.159737126" observedRunningTime="2025-11-24 21:24:37.988282335 +0000 UTC m=+7671.777234706" watchObservedRunningTime="2025-11-24 21:24:37.994465031 +0000 UTC m=+7671.783417402" Nov 24 21:24:46 crc kubenswrapper[4812]: I1124 21:24:46.974098 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:24:46 crc kubenswrapper[4812]: E1124 21:24:46.975921 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:24:47 crc kubenswrapper[4812]: I1124 21:24:47.854400 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:47 crc kubenswrapper[4812]: I1124 21:24:47.854446 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:47 crc kubenswrapper[4812]: I1124 21:24:47.906474 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:48 crc kubenswrapper[4812]: I1124 21:24:48.123472 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:48 crc kubenswrapper[4812]: I1124 21:24:48.182489 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rq45m"] Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.083094 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rq45m" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="registry-server" containerID="cri-o://c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4" gracePeriod=2 Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.693236 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.725967 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-catalog-content\") pod \"9f76824b-2026-4c59-98bf-31d6b2e2a245\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.726064 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9f2c\" (UniqueName: \"kubernetes.io/projected/9f76824b-2026-4c59-98bf-31d6b2e2a245-kube-api-access-d9f2c\") pod \"9f76824b-2026-4c59-98bf-31d6b2e2a245\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.726101 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-utilities\") pod \"9f76824b-2026-4c59-98bf-31d6b2e2a245\" (UID: \"9f76824b-2026-4c59-98bf-31d6b2e2a245\") " Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.727805 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-utilities" (OuterVolumeSpecName: "utilities") pod "9f76824b-2026-4c59-98bf-31d6b2e2a245" (UID: "9f76824b-2026-4c59-98bf-31d6b2e2a245"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.732905 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f76824b-2026-4c59-98bf-31d6b2e2a245-kube-api-access-d9f2c" (OuterVolumeSpecName: "kube-api-access-d9f2c") pod "9f76824b-2026-4c59-98bf-31d6b2e2a245" (UID: "9f76824b-2026-4c59-98bf-31d6b2e2a245"). InnerVolumeSpecName "kube-api-access-d9f2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.831102 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9f2c\" (UniqueName: \"kubernetes.io/projected/9f76824b-2026-4c59-98bf-31d6b2e2a245-kube-api-access-d9f2c\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.831141 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.841209 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f76824b-2026-4c59-98bf-31d6b2e2a245" (UID: "9f76824b-2026-4c59-98bf-31d6b2e2a245"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:50 crc kubenswrapper[4812]: I1124 21:24:50.933118 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f76824b-2026-4c59-98bf-31d6b2e2a245-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.096025 4812 generic.go:334] "Generic (PLEG): container finished" podID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerID="c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4" exitCode=0 Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.096050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rq45m" event={"ID":"9f76824b-2026-4c59-98bf-31d6b2e2a245","Type":"ContainerDied","Data":"c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4"} Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.096420 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rq45m" event={"ID":"9f76824b-2026-4c59-98bf-31d6b2e2a245","Type":"ContainerDied","Data":"d8bf1208c59743881d65f563556d220bfb6489ea4e1270aaad5b18ec698c289e"} Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.096449 4812 scope.go:117] "RemoveContainer" containerID="c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.096115 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rq45m" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.122791 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rq45m"] Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.135003 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rq45m"] Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.139087 4812 scope.go:117] "RemoveContainer" containerID="8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.170487 4812 scope.go:117] "RemoveContainer" containerID="d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.229619 4812 scope.go:117] "RemoveContainer" containerID="c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4" Nov 24 21:24:51 crc kubenswrapper[4812]: E1124 21:24:51.230170 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4\": container with ID starting with c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4 not found: ID does not exist" containerID="c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.230214 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4"} err="failed to get container status \"c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4\": rpc error: code = NotFound desc = could not find container \"c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4\": container with ID starting with c0ab89f1efc7f7e3ed0682532086caf3c80159a2666f33e9d5e8c521ee51e6e4 not found: ID does not exist" Nov 24 21:24:51 crc 
kubenswrapper[4812]: I1124 21:24:51.230242 4812 scope.go:117] "RemoveContainer" containerID="8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582" Nov 24 21:24:51 crc kubenswrapper[4812]: E1124 21:24:51.231176 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582\": container with ID starting with 8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582 not found: ID does not exist" containerID="8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.231210 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582"} err="failed to get container status \"8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582\": rpc error: code = NotFound desc = could not find container \"8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582\": container with ID starting with 8e66d86f3599e1de1a17a03991af36432955b024eb2424000a912849e7e50582 not found: ID does not exist" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.231230 4812 scope.go:117] "RemoveContainer" containerID="d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287" Nov 24 21:24:51 crc kubenswrapper[4812]: E1124 21:24:51.231553 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287\": container with ID starting with d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287 not found: ID does not exist" containerID="d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287" Nov 24 21:24:51 crc kubenswrapper[4812]: I1124 21:24:51.231593 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287"} err="failed to get container status \"d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287\": rpc error: code = NotFound desc = could not find container \"d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287\": container with ID starting with d217457fbc8c8e2f2437204219564bd9b652ca210726f1b26dd3c917c4cf5287 not found: ID does not exist" Nov 24 21:24:52 crc kubenswrapper[4812]: I1124 21:24:52.992373 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" path="/var/lib/kubelet/pods/9f76824b-2026-4c59-98bf-31d6b2e2a245/volumes" Nov 24 21:24:54 crc kubenswrapper[4812]: I1124 21:24:54.133737 4812 generic.go:334] "Generic (PLEG): container finished" podID="37665f2f-cfb5-42f7-b07b-5accbe05e5ec" containerID="211c0c94a78a98bf0f419efa09f54ffbbfbcb8b0ced18264a8c485dbcc471b49" exitCode=0 Nov 24 21:24:54 crc kubenswrapper[4812]: I1124 21:24:54.133804 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" event={"ID":"37665f2f-cfb5-42f7-b07b-5accbe05e5ec","Type":"ContainerDied","Data":"211c0c94a78a98bf0f419efa09f54ffbbfbcb8b0ced18264a8c485dbcc471b49"} Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.642823 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.744481 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djrlr\" (UniqueName: \"kubernetes.io/projected/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-kube-api-access-djrlr\") pod \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.744569 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-inventory\") pod \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.744750 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-ssh-key\") pod \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\" (UID: \"37665f2f-cfb5-42f7-b07b-5accbe05e5ec\") " Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.749480 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-kube-api-access-djrlr" (OuterVolumeSpecName: "kube-api-access-djrlr") pod "37665f2f-cfb5-42f7-b07b-5accbe05e5ec" (UID: "37665f2f-cfb5-42f7-b07b-5accbe05e5ec"). InnerVolumeSpecName "kube-api-access-djrlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.787888 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-inventory" (OuterVolumeSpecName: "inventory") pod "37665f2f-cfb5-42f7-b07b-5accbe05e5ec" (UID: "37665f2f-cfb5-42f7-b07b-5accbe05e5ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.791091 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "37665f2f-cfb5-42f7-b07b-5accbe05e5ec" (UID: "37665f2f-cfb5-42f7-b07b-5accbe05e5ec"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.846912 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.846947 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djrlr\" (UniqueName: \"kubernetes.io/projected/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-kube-api-access-djrlr\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:55 crc kubenswrapper[4812]: I1124 21:24:55.846959 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/37665f2f-cfb5-42f7-b07b-5accbe05e5ec-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.155496 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" event={"ID":"37665f2f-cfb5-42f7-b07b-5accbe05e5ec","Type":"ContainerDied","Data":"f4e96792cc083bf5cc92728c754c8e31977ffe68647149683cd720aa38ed8985"} Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.155909 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4e96792cc083bf5cc92728c754c8e31977ffe68647149683cd720aa38ed8985" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.155654 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-4bh9q" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.265067 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-c22ms"] Nov 24 21:24:56 crc kubenswrapper[4812]: E1124 21:24:56.265586 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="registry-server" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.265604 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="registry-server" Nov 24 21:24:56 crc kubenswrapper[4812]: E1124 21:24:56.265627 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="extract-utilities" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.265634 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="extract-utilities" Nov 24 21:24:56 crc kubenswrapper[4812]: E1124 21:24:56.265649 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37665f2f-cfb5-42f7-b07b-5accbe05e5ec" containerName="configure-network-openstack-openstack-cell1" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.265656 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="37665f2f-cfb5-42f7-b07b-5accbe05e5ec" containerName="configure-network-openstack-openstack-cell1" Nov 24 21:24:56 crc kubenswrapper[4812]: E1124 21:24:56.265672 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="extract-content" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.265679 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="extract-content" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.265878 4812 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="37665f2f-cfb5-42f7-b07b-5accbe05e5ec" containerName="configure-network-openstack-openstack-cell1" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.265903 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f76824b-2026-4c59-98bf-31d6b2e2a245" containerName="registry-server" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.266760 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.273558 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.273640 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.273967 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.274315 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.293418 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-c22ms"] Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.360657 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-ssh-key\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.360960 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-inventory\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.361066 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b7rb\" (UniqueName: \"kubernetes.io/projected/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-kube-api-access-7b7rb\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.465377 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-ssh-key\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.465810 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-inventory\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " 
pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.465985 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b7rb\" (UniqueName: \"kubernetes.io/projected/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-kube-api-access-7b7rb\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.479238 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-inventory\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.479370 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-ssh-key\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.488116 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b7rb\" (UniqueName: \"kubernetes.io/projected/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-kube-api-access-7b7rb\") pod \"validate-network-openstack-openstack-cell1-c22ms\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") " pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:56 crc kubenswrapper[4812]: I1124 21:24:56.592166 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-c22ms" Nov 24 21:24:57 crc kubenswrapper[4812]: I1124 21:24:57.140272 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-c22ms"] Nov 24 21:24:57 crc kubenswrapper[4812]: W1124 21:24:57.146724 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cedb7d1_e8f4_4652_9d14_bec67cb873eb.slice/crio-ee48cb6ef4a2c96287a33bdd898a46bf301e1df4173dcedb8717ecd225e54cc0 WatchSource:0}: Error finding container ee48cb6ef4a2c96287a33bdd898a46bf301e1df4173dcedb8717ecd225e54cc0: Status 404 returned error can't find the container with id ee48cb6ef4a2c96287a33bdd898a46bf301e1df4173dcedb8717ecd225e54cc0 Nov 24 21:24:57 crc kubenswrapper[4812]: I1124 21:24:57.168690 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-c22ms" event={"ID":"9cedb7d1-e8f4-4652-9d14-bec67cb873eb","Type":"ContainerStarted","Data":"ee48cb6ef4a2c96287a33bdd898a46bf301e1df4173dcedb8717ecd225e54cc0"} Nov 24 21:24:58 crc kubenswrapper[4812]: I1124 21:24:58.178396 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-c22ms" event={"ID":"9cedb7d1-e8f4-4652-9d14-bec67cb873eb","Type":"ContainerStarted","Data":"bdb02efcaab6021430c069b3c44c96f9fd2526000e2d2606b9c01b75a93f6cfb"} Nov 24 21:24:59 crc kubenswrapper[4812]: I1124 21:24:59.920455 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-c22ms" podStartSLOduration=3.4104425259999998 podStartE2EDuration="3.92042666s" podCreationTimestamp="2025-11-24 21:24:56 +0000 UTC" firstStartedPulling="2025-11-24 21:24:57.149777006 +0000 UTC m=+7690.938729377" lastFinishedPulling="2025-11-24 21:24:57.6597611 +0000 UTC m=+7691.448713511" observedRunningTime="2025-11-24 21:24:58.197726757 +0000 UTC m=+7691.986679128" watchObservedRunningTime="2025-11-24 21:24:59.92042666 +0000 UTC m=+7693.709379071" Nov 24 21:24:59 crc kubenswrapper[4812]: I1124 21:24:59.934194 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vghfr"] Nov 24 21:24:59 crc kubenswrapper[4812]: I1124 21:24:59.937979 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:24:59 crc kubenswrapper[4812]: I1124 21:24:59.947260 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vghfr"]
Nov 24 21:24:59 crc kubenswrapper[4812]: I1124 21:24:59.966364 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"
Nov 24 21:24:59 crc kubenswrapper[4812]: E1124 21:24:59.966714 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.044655 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-catalog-content\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.045162 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-utilities\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.047127 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb72z\" (UniqueName: \"kubernetes.io/projected/5687720a-9c94-42bb-b631-e4b34a1fadbe-kube-api-access-lb72z\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.155107 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-utilities\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.155354 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb72z\" (UniqueName: \"kubernetes.io/projected/5687720a-9c94-42bb-b631-e4b34a1fadbe-kube-api-access-lb72z\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.155592 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-catalog-content\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.156376 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-catalog-content\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.156798 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-utilities\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.194255 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb72z\" (UniqueName: \"kubernetes.io/projected/5687720a-9c94-42bb-b631-e4b34a1fadbe-kube-api-access-lb72z\") pod \"redhat-marketplace-vghfr\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") " pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.276457 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:00 crc kubenswrapper[4812]: W1124 21:25:00.814525 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5687720a_9c94_42bb_b631_e4b34a1fadbe.slice/crio-5b4256378ec9ab05b701070bd0a701eb1aac3de48961a8b2a072785dffac53c3 WatchSource:0}: Error finding container 5b4256378ec9ab05b701070bd0a701eb1aac3de48961a8b2a072785dffac53c3: Status 404 returned error can't find the container with id 5b4256378ec9ab05b701070bd0a701eb1aac3de48961a8b2a072785dffac53c3
Nov 24 21:25:00 crc kubenswrapper[4812]: I1124 21:25:00.823501 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vghfr"]
Nov 24 21:25:01 crc kubenswrapper[4812]: I1124 21:25:01.230703 4812 generic.go:334] "Generic (PLEG): container finished" podID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerID="087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6" exitCode=0
Nov 24 21:25:01 crc kubenswrapper[4812]: I1124 21:25:01.230770 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vghfr" event={"ID":"5687720a-9c94-42bb-b631-e4b34a1fadbe","Type":"ContainerDied","Data":"087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6"}
Nov 24 21:25:01 crc kubenswrapper[4812]: I1124 21:25:01.230808 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vghfr" event={"ID":"5687720a-9c94-42bb-b631-e4b34a1fadbe","Type":"ContainerStarted","Data":"5b4256378ec9ab05b701070bd0a701eb1aac3de48961a8b2a072785dffac53c3"}
Nov 24 21:25:02 crc kubenswrapper[4812]: I1124 21:25:02.243107 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vghfr" event={"ID":"5687720a-9c94-42bb-b631-e4b34a1fadbe","Type":"ContainerStarted","Data":"a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec"}
Nov 24 21:25:03 crc kubenswrapper[4812]: I1124 21:25:03.256475 4812 generic.go:334] "Generic (PLEG): container finished" podID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerID="a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec" exitCode=0
Nov 24 21:25:03 crc kubenswrapper[4812]: I1124 21:25:03.256579 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vghfr" event={"ID":"5687720a-9c94-42bb-b631-e4b34a1fadbe","Type":"ContainerDied","Data":"a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec"}
Nov 24 21:25:04 crc kubenswrapper[4812]: I1124 21:25:04.268294 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vghfr" event={"ID":"5687720a-9c94-42bb-b631-e4b34a1fadbe","Type":"ContainerStarted","Data":"42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d"}
Nov 24 21:25:04 crc kubenswrapper[4812]: I1124 21:25:04.270298 4812 generic.go:334] "Generic (PLEG): container finished" podID="9cedb7d1-e8f4-4652-9d14-bec67cb873eb" containerID="bdb02efcaab6021430c069b3c44c96f9fd2526000e2d2606b9c01b75a93f6cfb" exitCode=0
Nov 24 21:25:04 crc kubenswrapper[4812]: I1124 21:25:04.270347 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-c22ms" event={"ID":"9cedb7d1-e8f4-4652-9d14-bec67cb873eb","Type":"ContainerDied","Data":"bdb02efcaab6021430c069b3c44c96f9fd2526000e2d2606b9c01b75a93f6cfb"}
Nov 24 21:25:04 crc kubenswrapper[4812]: I1124 21:25:04.298793 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vghfr" podStartSLOduration=2.838702099 podStartE2EDuration="5.298776011s" podCreationTimestamp="2025-11-24 21:24:59 +0000 UTC" firstStartedPulling="2025-11-24 21:25:01.235947105 +0000 UTC m=+7695.024899476" lastFinishedPulling="2025-11-24 21:25:03.696021007 +0000 UTC m=+7697.484973388" observedRunningTime="2025-11-24 21:25:04.296048323 +0000 UTC m=+7698.085000704" watchObservedRunningTime="2025-11-24 21:25:04.298776011 +0000 UTC m=+7698.087728382"
Nov 24 21:25:05 crc kubenswrapper[4812]: I1124 21:25:05.752914 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-c22ms"
Nov 24 21:25:05 crc kubenswrapper[4812]: I1124 21:25:05.900486 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-ssh-key\") pod \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") "
Nov 24 21:25:05 crc kubenswrapper[4812]: I1124 21:25:05.901077 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b7rb\" (UniqueName: \"kubernetes.io/projected/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-kube-api-access-7b7rb\") pod \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") "
Nov 24 21:25:05 crc kubenswrapper[4812]: I1124 21:25:05.901115 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-inventory\") pod \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\" (UID: \"9cedb7d1-e8f4-4652-9d14-bec67cb873eb\") "
Nov 24 21:25:05 crc kubenswrapper[4812]: I1124 21:25:05.908710 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-kube-api-access-7b7rb" (OuterVolumeSpecName: "kube-api-access-7b7rb") pod "9cedb7d1-e8f4-4652-9d14-bec67cb873eb" (UID: "9cedb7d1-e8f4-4652-9d14-bec67cb873eb"). InnerVolumeSpecName "kube-api-access-7b7rb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:25:05 crc kubenswrapper[4812]: I1124 21:25:05.941248 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9cedb7d1-e8f4-4652-9d14-bec67cb873eb" (UID: "9cedb7d1-e8f4-4652-9d14-bec67cb873eb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:25:05 crc kubenswrapper[4812]: I1124 21:25:05.942534 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-inventory" (OuterVolumeSpecName: "inventory") pod "9cedb7d1-e8f4-4652-9d14-bec67cb873eb" (UID: "9cedb7d1-e8f4-4652-9d14-bec67cb873eb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.005000 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b7rb\" (UniqueName: \"kubernetes.io/projected/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-kube-api-access-7b7rb\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.005665 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.005796 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9cedb7d1-e8f4-4652-9d14-bec67cb873eb-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.114842 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-swspx"]
Nov 24 21:25:06 crc kubenswrapper[4812]: E1124 21:25:06.115945 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cedb7d1-e8f4-4652-9d14-bec67cb873eb" containerName="validate-network-openstack-openstack-cell1"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.116072 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cedb7d1-e8f4-4652-9d14-bec67cb873eb" containerName="validate-network-openstack-openstack-cell1"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.116521 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cedb7d1-e8f4-4652-9d14-bec67cb873eb" containerName="validate-network-openstack-openstack-cell1"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.118720 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.122167 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swspx"]
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.294111 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-c22ms" event={"ID":"9cedb7d1-e8f4-4652-9d14-bec67cb873eb","Type":"ContainerDied","Data":"ee48cb6ef4a2c96287a33bdd898a46bf301e1df4173dcedb8717ecd225e54cc0"}
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.294476 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee48cb6ef4a2c96287a33bdd898a46bf301e1df4173dcedb8717ecd225e54cc0"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.294447 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-c22ms"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.312575 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-catalog-content\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.312712 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-utilities\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.312867 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsv6p\" (UniqueName: \"kubernetes.io/projected/9b92065a-f3a0-46c5-ba92-8cccb0a98517-kube-api-access-gsv6p\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.415410 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsv6p\" (UniqueName: \"kubernetes.io/projected/9b92065a-f3a0-46c5-ba92-8cccb0a98517-kube-api-access-gsv6p\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.415843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-catalog-content\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.416024 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-utilities\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.416719 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-catalog-content\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.417261 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-utilities\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.435678 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsv6p\" (UniqueName: \"kubernetes.io/projected/9b92065a-f3a0-46c5-ba92-8cccb0a98517-kube-api-access-gsv6p\") pod \"community-operators-swspx\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") " pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.467771 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-h56pj"]
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.469406 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.475904 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.479850 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.479956 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.480272 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.480512 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.488179 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-h56pj"]
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.518393 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-inventory\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.518799 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qh9b\" (UniqueName: \"kubernetes.io/projected/5a2711f5-c999-47eb-9a01-04e4691e5983-kube-api-access-6qh9b\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.518840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-ssh-key\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.620631 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qh9b\" (UniqueName: \"kubernetes.io/projected/5a2711f5-c999-47eb-9a01-04e4691e5983-kube-api-access-6qh9b\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.620713 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-ssh-key\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.620840 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-inventory\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.629207 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-ssh-key\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.630637 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-inventory\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.639615 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qh9b\" (UniqueName: \"kubernetes.io/projected/5a2711f5-c999-47eb-9a01-04e4691e5983-kube-api-access-6qh9b\") pod \"install-os-openstack-openstack-cell1-h56pj\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") " pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:06 crc kubenswrapper[4812]: I1124 21:25:06.825438 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:07 crc kubenswrapper[4812]: I1124 21:25:07.005711 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-swspx"]
Nov 24 21:25:07 crc kubenswrapper[4812]: W1124 21:25:07.035576 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b92065a_f3a0_46c5_ba92_8cccb0a98517.slice/crio-e6193bf756ac98f56d7cb6297b64dfe9df74822c7fe0dd721738235be767926c WatchSource:0}: Error finding container e6193bf756ac98f56d7cb6297b64dfe9df74822c7fe0dd721738235be767926c: Status 404 returned error can't find the container with id e6193bf756ac98f56d7cb6297b64dfe9df74822c7fe0dd721738235be767926c
Nov 24 21:25:08 crc kubenswrapper[4812]: I1124 21:25:07.304695 4812 generic.go:334] "Generic (PLEG): container finished" podID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerID="36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855" exitCode=0
Nov 24 21:25:08 crc kubenswrapper[4812]: I1124 21:25:07.304793 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swspx" event={"ID":"9b92065a-f3a0-46c5-ba92-8cccb0a98517","Type":"ContainerDied","Data":"36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855"}
Nov 24 21:25:08 crc kubenswrapper[4812]: I1124 21:25:07.305026 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swspx" event={"ID":"9b92065a-f3a0-46c5-ba92-8cccb0a98517","Type":"ContainerStarted","Data":"e6193bf756ac98f56d7cb6297b64dfe9df74822c7fe0dd721738235be767926c"}
Nov 24 21:25:08 crc kubenswrapper[4812]: I1124 21:25:07.427484 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-h56pj"]
Nov 24 21:25:08 crc kubenswrapper[4812]: I1124 21:25:08.317202 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-h56pj" event={"ID":"5a2711f5-c999-47eb-9a01-04e4691e5983","Type":"ContainerStarted","Data":"d0836590718ab0b8172fb14df65abddaba00830aeae3484302173984873e26db"}
Nov 24 21:25:08 crc kubenswrapper[4812]: I1124 21:25:08.317535 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-h56pj" event={"ID":"5a2711f5-c999-47eb-9a01-04e4691e5983","Type":"ContainerStarted","Data":"2d198ed94e105068a9ddf5b6c3050f57d50e1578d038a4140ab2bbe88e90ae52"}
Nov 24 21:25:08 crc kubenswrapper[4812]: I1124 21:25:08.342888 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-h56pj" podStartSLOduration=1.914805034 podStartE2EDuration="2.342867046s" podCreationTimestamp="2025-11-24 21:25:06 +0000 UTC" firstStartedPulling="2025-11-24 21:25:07.436202601 +0000 UTC m=+7701.225154982" lastFinishedPulling="2025-11-24 21:25:07.864264633 +0000 UTC m=+7701.653216994" observedRunningTime="2025-11-24 21:25:08.331526405 +0000 UTC m=+7702.120478826" watchObservedRunningTime="2025-11-24 21:25:08.342867046 +0000 UTC m=+7702.131819427"
Nov 24 21:25:10 crc kubenswrapper[4812]: I1124 21:25:10.277135 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:10 crc kubenswrapper[4812]: I1124 21:25:10.277259 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:10 crc kubenswrapper[4812]: I1124 21:25:10.360211 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:10 crc kubenswrapper[4812]: I1124 21:25:10.422377 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:10 crc kubenswrapper[4812]: I1124 21:25:10.966310 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"
Nov 24 21:25:10 crc kubenswrapper[4812]: E1124 21:25:10.966821 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:25:11 crc kubenswrapper[4812]: I1124 21:25:11.521454 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vghfr"]
Nov 24 21:25:12 crc kubenswrapper[4812]: I1124 21:25:12.399078 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vghfr" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="registry-server" containerID="cri-o://42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d" gracePeriod=2
Nov 24 21:25:12 crc kubenswrapper[4812]: I1124 21:25:12.400671 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swspx" event={"ID":"9b92065a-f3a0-46c5-ba92-8cccb0a98517","Type":"ContainerStarted","Data":"f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6"}
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.063366 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.110232 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lb72z\" (UniqueName: \"kubernetes.io/projected/5687720a-9c94-42bb-b631-e4b34a1fadbe-kube-api-access-lb72z\") pod \"5687720a-9c94-42bb-b631-e4b34a1fadbe\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") "
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.110399 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-utilities\") pod \"5687720a-9c94-42bb-b631-e4b34a1fadbe\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") "
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.110471 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-catalog-content\") pod \"5687720a-9c94-42bb-b631-e4b34a1fadbe\" (UID: \"5687720a-9c94-42bb-b631-e4b34a1fadbe\") "
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.112785 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-utilities" (OuterVolumeSpecName: "utilities") pod "5687720a-9c94-42bb-b631-e4b34a1fadbe" (UID: "5687720a-9c94-42bb-b631-e4b34a1fadbe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.126237 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5687720a-9c94-42bb-b631-e4b34a1fadbe-kube-api-access-lb72z" (OuterVolumeSpecName: "kube-api-access-lb72z") pod "5687720a-9c94-42bb-b631-e4b34a1fadbe" (UID: "5687720a-9c94-42bb-b631-e4b34a1fadbe"). InnerVolumeSpecName "kube-api-access-lb72z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.143399 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5687720a-9c94-42bb-b631-e4b34a1fadbe" (UID: "5687720a-9c94-42bb-b631-e4b34a1fadbe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.213703 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lb72z\" (UniqueName: \"kubernetes.io/projected/5687720a-9c94-42bb-b631-e4b34a1fadbe-kube-api-access-lb72z\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.213738 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.213752 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5687720a-9c94-42bb-b631-e4b34a1fadbe-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.415880 4812 generic.go:334] "Generic (PLEG): container finished" podID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerID="42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d" exitCode=0
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.415953 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vghfr"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.415997 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vghfr" event={"ID":"5687720a-9c94-42bb-b631-e4b34a1fadbe","Type":"ContainerDied","Data":"42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d"}
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.416036 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vghfr" event={"ID":"5687720a-9c94-42bb-b631-e4b34a1fadbe","Type":"ContainerDied","Data":"5b4256378ec9ab05b701070bd0a701eb1aac3de48961a8b2a072785dffac53c3"}
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.416057 4812 scope.go:117] "RemoveContainer" containerID="42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.451431 4812 scope.go:117] "RemoveContainer" containerID="a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.482133 4812 scope.go:117] "RemoveContainer" containerID="087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.485246 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vghfr"]
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.499379 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vghfr"]
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.540695 4812 scope.go:117] "RemoveContainer" containerID="42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d"
Nov 24 21:25:13 crc kubenswrapper[4812]: E1124 21:25:13.541226 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d\": container with ID starting with 42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d not found: ID does not exist" containerID="42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.541253 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d"} err="failed to get container status \"42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d\": rpc error: code = NotFound desc = could not find container \"42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d\": container with ID starting with 42bec50d205065f8110c3b0fd925f3d69c51d0c40d23861c8ca6b753e4254c1d not found: ID does not exist"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.541272 4812 scope.go:117] "RemoveContainer" containerID="a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec"
Nov 24 21:25:13 crc kubenswrapper[4812]: E1124 21:25:13.541739 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec\": container with ID starting with a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec not found: ID does not exist" containerID="a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.541783 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec"} err="failed to get container status \"a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec\": rpc error: code = NotFound desc = could not find container \"a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec\": container with ID starting with a8be969d1f226144a5df22288598041a5713719ad58dee070dc0f70caca488ec not found: ID does not exist"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.541803 4812 scope.go:117] "RemoveContainer" containerID="087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6"
Nov 24 21:25:13 crc kubenswrapper[4812]: E1124 21:25:13.542238 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6\": container with ID starting with 087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6 not found: ID does not exist" containerID="087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6"
Nov 24 21:25:13 crc kubenswrapper[4812]: I1124 21:25:13.542255 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6"} err="failed to get container status \"087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6\": rpc error: code = NotFound desc = could not find container \"087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6\": container with ID starting with 087040cca8be22e9114bd0709ac9e6dd004a479d22057a5f1e64d59528aa74e6 not found: ID does not exist"
Nov 24 21:25:14 crc kubenswrapper[4812]: I1124 21:25:14.441647 4812 generic.go:334] "Generic (PLEG): container finished" podID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerID="f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6" exitCode=0
Nov 24 21:25:14 crc kubenswrapper[4812]: I1124 21:25:14.441724 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swspx" event={"ID":"9b92065a-f3a0-46c5-ba92-8cccb0a98517","Type":"ContainerDied","Data":"f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6"}
Nov 24 21:25:14 crc kubenswrapper[4812]: I1124 21:25:14.980002 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" path="/var/lib/kubelet/pods/5687720a-9c94-42bb-b631-e4b34a1fadbe/volumes"
Nov 24 21:25:15 crc kubenswrapper[4812]: I1124 21:25:15.467125 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swspx" event={"ID":"9b92065a-f3a0-46c5-ba92-8cccb0a98517","Type":"ContainerStarted","Data":"7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3"}
Nov 24 21:25:15 crc kubenswrapper[4812]: I1124 21:25:15.505400 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-swspx" podStartSLOduration=1.95555196 podStartE2EDuration="9.505378483s" podCreationTimestamp="2025-11-24 21:25:06 +0000 UTC" firstStartedPulling="2025-11-24 21:25:07.306800453 +0000 UTC m=+7701.095752824" lastFinishedPulling="2025-11-24 21:25:14.856626976 +0000 UTC m=+7708.645579347" observedRunningTime="2025-11-24 21:25:15.489628957 +0000 UTC m=+7709.278581358" watchObservedRunningTime="2025-11-24 21:25:15.505378483 +0000 UTC m=+7709.294330864"
Nov 24 21:25:16 crc kubenswrapper[4812]: I1124 21:25:16.476366 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:16 crc kubenswrapper[4812]: I1124 21:25:16.476722 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:17 crc kubenswrapper[4812]: I1124 21:25:17.537979 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-swspx" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="registry-server" probeResult="failure" output=<
Nov 24 21:25:17 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s
Nov 24 21:25:17 crc kubenswrapper[4812]: >
Nov 24 21:25:25 crc kubenswrapper[4812]: I1124 21:25:25.965649 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"
Nov 24 21:25:25 crc kubenswrapper[4812]: E1124 21:25:25.966570 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:25:26 crc kubenswrapper[4812]: I1124 21:25:26.547072 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:26 crc kubenswrapper[4812]: I1124 21:25:26.627170 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:26 crc kubenswrapper[4812]: I1124 21:25:26.797944 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swspx"]
Nov 24 21:25:27 crc kubenswrapper[4812]: I1124 21:25:27.617150 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-swspx" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="registry-server" containerID="cri-o://7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3" gracePeriod=2
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.191418 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.201536 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-catalog-content\") pod \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") "
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.201602 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-utilities\") pod \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") "
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.201703 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsv6p\" (UniqueName: \"kubernetes.io/projected/9b92065a-f3a0-46c5-ba92-8cccb0a98517-kube-api-access-gsv6p\") pod \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\" (UID: \"9b92065a-f3a0-46c5-ba92-8cccb0a98517\") "
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.202610 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-utilities" (OuterVolumeSpecName: "utilities") pod "9b92065a-f3a0-46c5-ba92-8cccb0a98517" (UID: "9b92065a-f3a0-46c5-ba92-8cccb0a98517"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.218933 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b92065a-f3a0-46c5-ba92-8cccb0a98517-kube-api-access-gsv6p" (OuterVolumeSpecName: "kube-api-access-gsv6p") pod "9b92065a-f3a0-46c5-ba92-8cccb0a98517" (UID: "9b92065a-f3a0-46c5-ba92-8cccb0a98517"). InnerVolumeSpecName "kube-api-access-gsv6p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.293168 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b92065a-f3a0-46c5-ba92-8cccb0a98517" (UID: "9b92065a-f3a0-46c5-ba92-8cccb0a98517"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.306914 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsv6p\" (UniqueName: \"kubernetes.io/projected/9b92065a-f3a0-46c5-ba92-8cccb0a98517-kube-api-access-gsv6p\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.306952 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:28 crc kubenswrapper[4812]: I1124 21:25:28.306989 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92065a-f3a0-46c5-ba92-8cccb0a98517-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.251472 4812 generic.go:334] "Generic (PLEG): container finished" podID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerID="7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3" exitCode=0
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.251873 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swspx" event={"ID":"9b92065a-f3a0-46c5-ba92-8cccb0a98517","Type":"ContainerDied","Data":"7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3"}
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.251922 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-swspx" event={"ID":"9b92065a-f3a0-46c5-ba92-8cccb0a98517","Type":"ContainerDied","Data":"e6193bf756ac98f56d7cb6297b64dfe9df74822c7fe0dd721738235be767926c"}
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.251950 4812 scope.go:117] "RemoveContainer" containerID="7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.252161 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-swspx"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.289651 4812 scope.go:117] "RemoveContainer" containerID="f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.294857 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-swspx"]
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.303105 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-swspx"]
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.312155 4812 scope.go:117] "RemoveContainer" containerID="36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.367974 4812 scope.go:117] "RemoveContainer" containerID="7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3"
Nov 24 21:25:29 crc kubenswrapper[4812]: E1124 21:25:29.368856 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3\": container with ID starting with 7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3 not found: ID does not exist" containerID="7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.368925 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3"} err="failed to get container status \"7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3\": rpc error: code = NotFound desc = could not find container \"7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3\": container with ID starting with 7b77b06b22c58c801df9801ce026dd6e07ee3d69d48f2d19e9e5f9696fb2cde3 not found: ID does not exist"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.368959 4812 scope.go:117] "RemoveContainer" containerID="f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6"
Nov 24 21:25:29 crc kubenswrapper[4812]: E1124 21:25:29.369496 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6\": container with ID starting with f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6 not found: ID does not exist" containerID="f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.369548 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6"} err="failed to get container status \"f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6\": rpc error: code = NotFound desc = could not find container \"f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6\": container with ID starting with f2ecd0df218678521ce6d0512c4d0115fc698b343c0d8ee8bc0b7569d878cad6 not found: ID does not exist"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.369576 4812 scope.go:117] "RemoveContainer" containerID="36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855"
Nov 24 21:25:29 crc kubenswrapper[4812]: E1124 21:25:29.369910 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855\": container with ID starting with 36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855 not found: ID does not exist" containerID="36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855"
Nov 24 21:25:29 crc kubenswrapper[4812]: I1124 21:25:29.369960 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855"} err="failed to get container status \"36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855\": rpc error: code = NotFound desc = could not find container \"36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855\": container with ID starting with 36d25c36e0787f6178d986b66a3ce00f08c00136d8d1af092dc73521d384b855 not found: ID does not exist"
Nov 24 21:25:30 crc kubenswrapper[4812]: I1124 21:25:30.992123 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" path="/var/lib/kubelet/pods/9b92065a-f3a0-46c5-ba92-8cccb0a98517/volumes"
Nov 24 21:25:39 crc kubenswrapper[4812]: I1124 21:25:39.966624 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"
Nov 24 21:25:39 crc kubenswrapper[4812]: E1124 21:25:39.969211 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:25:51 crc kubenswrapper[4812]: I1124 21:25:51.965930 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"
Nov 24 21:25:51 crc kubenswrapper[4812]: E1124 21:25:51.967036 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:25:54 crc kubenswrapper[4812]: I1124 21:25:54.570289 4812 generic.go:334] "Generic (PLEG): container finished" podID="5a2711f5-c999-47eb-9a01-04e4691e5983" containerID="d0836590718ab0b8172fb14df65abddaba00830aeae3484302173984873e26db" exitCode=0
Nov 24 21:25:54 crc kubenswrapper[4812]: I1124 21:25:54.570403 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-h56pj" event={"ID":"5a2711f5-c999-47eb-9a01-04e4691e5983","Type":"ContainerDied","Data":"d0836590718ab0b8172fb14df65abddaba00830aeae3484302173984873e26db"}
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.066497 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.182919 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-ssh-key\") pod \"5a2711f5-c999-47eb-9a01-04e4691e5983\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") "
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.183122 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-inventory\") pod \"5a2711f5-c999-47eb-9a01-04e4691e5983\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") "
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.183285 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qh9b\" (UniqueName: \"kubernetes.io/projected/5a2711f5-c999-47eb-9a01-04e4691e5983-kube-api-access-6qh9b\") pod \"5a2711f5-c999-47eb-9a01-04e4691e5983\" (UID: \"5a2711f5-c999-47eb-9a01-04e4691e5983\") "
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.188622 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2711f5-c999-47eb-9a01-04e4691e5983-kube-api-access-6qh9b" (OuterVolumeSpecName: "kube-api-access-6qh9b") pod "5a2711f5-c999-47eb-9a01-04e4691e5983" (UID: "5a2711f5-c999-47eb-9a01-04e4691e5983"). InnerVolumeSpecName "kube-api-access-6qh9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.224757 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-inventory" (OuterVolumeSpecName: "inventory") pod "5a2711f5-c999-47eb-9a01-04e4691e5983" (UID: "5a2711f5-c999-47eb-9a01-04e4691e5983"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.242107 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5a2711f5-c999-47eb-9a01-04e4691e5983" (UID: "5a2711f5-c999-47eb-9a01-04e4691e5983"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.287235 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.287310 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qh9b\" (UniqueName: \"kubernetes.io/projected/5a2711f5-c999-47eb-9a01-04e4691e5983-kube-api-access-6qh9b\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.287374 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5a2711f5-c999-47eb-9a01-04e4691e5983-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.595826 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-h56pj" event={"ID":"5a2711f5-c999-47eb-9a01-04e4691e5983","Type":"ContainerDied","Data":"2d198ed94e105068a9ddf5b6c3050f57d50e1578d038a4140ab2bbe88e90ae52"}
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.596257 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d198ed94e105068a9ddf5b6c3050f57d50e1578d038a4140ab2bbe88e90ae52"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.595907 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-h56pj"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.693504 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-85zlh"]
Nov 24 21:25:56 crc kubenswrapper[4812]: E1124 21:25:56.694304 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="extract-utilities"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.694442 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="extract-utilities"
Nov 24 21:25:56 crc kubenswrapper[4812]: E1124 21:25:56.694544 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="extract-content"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.694630 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="extract-content"
Nov 24 21:25:56 crc kubenswrapper[4812]: E1124 21:25:56.694768 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2711f5-c999-47eb-9a01-04e4691e5983" containerName="install-os-openstack-openstack-cell1"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.694875 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2711f5-c999-47eb-9a01-04e4691e5983" containerName="install-os-openstack-openstack-cell1"
Nov 24 21:25:56 crc kubenswrapper[4812]: E1124 21:25:56.694973 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="extract-content"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.695060 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="extract-content"
Nov 24 21:25:56 crc kubenswrapper[4812]: E1124 21:25:56.695186 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="extract-utilities"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.695298 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="extract-utilities"
Nov 24 21:25:56 crc kubenswrapper[4812]: E1124 21:25:56.695444 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="registry-server"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.695536 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="registry-server"
Nov 24 21:25:56 crc kubenswrapper[4812]: E1124 21:25:56.695635 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="registry-server"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.695725 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="registry-server"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.696159 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b92065a-f3a0-46c5-ba92-8cccb0a98517" containerName="registry-server"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.696278 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5687720a-9c94-42bb-b631-e4b34a1fadbe" containerName="registry-server"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.696409 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2711f5-c999-47eb-9a01-04e4691e5983" containerName="install-os-openstack-openstack-cell1"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.697680 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.708023 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-85zlh"]
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.743684 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.743934 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.744277 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.744454 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.799069 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-ssh-key\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.799186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-inventory\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.799295 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcw4h\" (UniqueName: \"kubernetes.io/projected/58b1f59e-7923-41f2-a94c-ffec3693018c-kube-api-access-mcw4h\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.901111 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-ssh-key\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.901211 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-inventory\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.901317 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcw4h\" (UniqueName: \"kubernetes.io/projected/58b1f59e-7923-41f2-a94c-ffec3693018c-kube-api-access-mcw4h\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.906021 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-ssh-key\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.906412 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-inventory\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:56 crc kubenswrapper[4812]: I1124 21:25:56.918263 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcw4h\" (UniqueName: \"kubernetes.io/projected/58b1f59e-7923-41f2-a94c-ffec3693018c-kube-api-access-mcw4h\") pod \"configure-os-openstack-openstack-cell1-85zlh\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:57 crc kubenswrapper[4812]: I1124 21:25:57.064010 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-85zlh"
Nov 24 21:25:57 crc kubenswrapper[4812]: I1124 21:25:57.439986 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-85zlh"]
Nov 24 21:25:57 crc kubenswrapper[4812]: I1124 21:25:57.607001 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-85zlh" event={"ID":"58b1f59e-7923-41f2-a94c-ffec3693018c","Type":"ContainerStarted","Data":"37ff599593308cfd4c87065e55839457f8eb8d824038449d27b8225cab2f3033"}
Nov 24 21:25:58 crc kubenswrapper[4812]: I1124 21:25:58.626681 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-85zlh" event={"ID":"58b1f59e-7923-41f2-a94c-ffec3693018c","Type":"ContainerStarted","Data":"ab198ce551a13122ef250100c6dad068b27a5e8349b4afbe2c76d86b732082e5"}
Nov 24 21:25:58 crc kubenswrapper[4812]: I1124 21:25:58.651606 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-85zlh" podStartSLOduration=1.9337889860000002 podStartE2EDuration="2.65158482s" podCreationTimestamp="2025-11-24 21:25:56 +0000 UTC" firstStartedPulling="2025-11-24 21:25:57.451054094 +0000 UTC m=+7751.240006475" lastFinishedPulling="2025-11-24 21:25:58.168849908 +0000 UTC m=+7751.957802309" observedRunningTime="2025-11-24 21:25:58.644658723 +0000 UTC m=+7752.433611094" watchObservedRunningTime="2025-11-24 21:25:58.65158482 +0000 UTC m=+7752.440537191"
Nov 24 21:26:03 crc kubenswrapper[4812]: I1124 21:26:03.967218 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586"
Nov 24 21:26:04 crc kubenswrapper[4812]: I1124 21:26:04.697107 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"f12dd0b6651f8011b7bf36f9bcf3ca426e59a40997459cec8ec481a2878ef075"}
Nov 24 21:26:42 crc kubenswrapper[4812]: I1124 21:26:42.143967 4812 generic.go:334] "Generic (PLEG): container finished" podID="58b1f59e-7923-41f2-a94c-ffec3693018c"
containerID="ab198ce551a13122ef250100c6dad068b27a5e8349b4afbe2c76d86b732082e5" exitCode=0 Nov 24 21:26:42 crc kubenswrapper[4812]: I1124 21:26:42.144100 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-85zlh" event={"ID":"58b1f59e-7923-41f2-a94c-ffec3693018c","Type":"ContainerDied","Data":"ab198ce551a13122ef250100c6dad068b27a5e8349b4afbe2c76d86b732082e5"} Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.612602 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-85zlh" Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.742066 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-ssh-key\") pod \"58b1f59e-7923-41f2-a94c-ffec3693018c\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.742236 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcw4h\" (UniqueName: \"kubernetes.io/projected/58b1f59e-7923-41f2-a94c-ffec3693018c-kube-api-access-mcw4h\") pod \"58b1f59e-7923-41f2-a94c-ffec3693018c\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.743225 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-inventory\") pod \"58b1f59e-7923-41f2-a94c-ffec3693018c\" (UID: \"58b1f59e-7923-41f2-a94c-ffec3693018c\") " Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.751259 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58b1f59e-7923-41f2-a94c-ffec3693018c-kube-api-access-mcw4h" (OuterVolumeSpecName: "kube-api-access-mcw4h") pod "58b1f59e-7923-41f2-a94c-ffec3693018c" (UID: "58b1f59e-7923-41f2-a94c-ffec3693018c"). InnerVolumeSpecName "kube-api-access-mcw4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.807980 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "58b1f59e-7923-41f2-a94c-ffec3693018c" (UID: "58b1f59e-7923-41f2-a94c-ffec3693018c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.817646 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-inventory" (OuterVolumeSpecName: "inventory") pod "58b1f59e-7923-41f2-a94c-ffec3693018c" (UID: "58b1f59e-7923-41f2-a94c-ffec3693018c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.847118 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.847164 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcw4h\" (UniqueName: \"kubernetes.io/projected/58b1f59e-7923-41f2-a94c-ffec3693018c-kube-api-access-mcw4h\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:43 crc kubenswrapper[4812]: I1124 21:26:43.847204 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/58b1f59e-7923-41f2-a94c-ffec3693018c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.165069 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-85zlh" event={"ID":"58b1f59e-7923-41f2-a94c-ffec3693018c","Type":"ContainerDied","Data":"37ff599593308cfd4c87065e55839457f8eb8d824038449d27b8225cab2f3033"} Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.165391 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37ff599593308cfd4c87065e55839457f8eb8d824038449d27b8225cab2f3033" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.165146 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-85zlh" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.354678 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-n5jp4"] Nov 24 21:26:44 crc kubenswrapper[4812]: E1124 21:26:44.355820 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58b1f59e-7923-41f2-a94c-ffec3693018c" containerName="configure-os-openstack-openstack-cell1" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.355851 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="58b1f59e-7923-41f2-a94c-ffec3693018c" containerName="configure-os-openstack-openstack-cell1" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.356273 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="58b1f59e-7923-41f2-a94c-ffec3693018c" containerName="configure-os-openstack-openstack-cell1" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.357616 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.362526 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.362633 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.362539 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.370282 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-n5jp4"] Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.372020 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.460467 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-inventory-0\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.460510 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2v4g\" (UniqueName: \"kubernetes.io/projected/e3443cfc-e84b-4210-a03e-7596c3e620bf-kube-api-access-f2v4g\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.460668 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.563205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.563423 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-inventory-0\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.563463 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2v4g\" (UniqueName: \"kubernetes.io/projected/e3443cfc-e84b-4210-a03e-7596c3e620bf-kube-api-access-f2v4g\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.573013 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" 
(UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-inventory-0\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.574316 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.595468 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2v4g\" (UniqueName: \"kubernetes.io/projected/e3443cfc-e84b-4210-a03e-7596c3e620bf-kube-api-access-f2v4g\") pod \"ssh-known-hosts-openstack-n5jp4\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:44 crc kubenswrapper[4812]: I1124 21:26:44.695664 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:45 crc kubenswrapper[4812]: I1124 21:26:45.249679 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-n5jp4"] Nov 24 21:26:46 crc kubenswrapper[4812]: I1124 21:26:46.186680 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n5jp4" event={"ID":"e3443cfc-e84b-4210-a03e-7596c3e620bf","Type":"ContainerStarted","Data":"5cb478cd62b236eab33901689016a9b7a2c542489aecf7af3d892d2d2128d7d2"} Nov 24 21:26:46 crc kubenswrapper[4812]: I1124 21:26:46.186970 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n5jp4" event={"ID":"e3443cfc-e84b-4210-a03e-7596c3e620bf","Type":"ContainerStarted","Data":"1936b47bec135357f9d206e52f76d9bee6a200ee2c5d3f28be2b6eaa1466ba43"} Nov 24 21:26:46 crc kubenswrapper[4812]: I1124 21:26:46.216300 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-n5jp4" podStartSLOduration=1.671615877 podStartE2EDuration="2.216278754s" podCreationTimestamp="2025-11-24 21:26:44 +0000 UTC" firstStartedPulling="2025-11-24 21:26:45.256270385 +0000 UTC m=+7799.045222756" lastFinishedPulling="2025-11-24 21:26:45.800933262 +0000 UTC m=+7799.589885633" observedRunningTime="2025-11-24 21:26:46.208591566 +0000 UTC m=+7799.997543967" watchObservedRunningTime="2025-11-24 21:26:46.216278754 +0000 UTC m=+7800.005231115" Nov 24 21:26:55 crc kubenswrapper[4812]: I1124 21:26:55.291986 4812 generic.go:334] "Generic (PLEG): container finished" podID="e3443cfc-e84b-4210-a03e-7596c3e620bf" containerID="5cb478cd62b236eab33901689016a9b7a2c542489aecf7af3d892d2d2128d7d2" exitCode=0 Nov 24 21:26:55 crc kubenswrapper[4812]: I1124 21:26:55.292076 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n5jp4" event={"ID":"e3443cfc-e84b-4210-a03e-7596c3e620bf","Type":"ContainerDied","Data":"5cb478cd62b236eab33901689016a9b7a2c542489aecf7af3d892d2d2128d7d2"} Nov 24 21:26:56 crc kubenswrapper[4812]: I1124 21:26:56.785149 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:56 crc kubenswrapper[4812]: I1124 21:26:56.948974 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-inventory-0\") pod \"e3443cfc-e84b-4210-a03e-7596c3e620bf\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " Nov 24 21:26:56 crc kubenswrapper[4812]: I1124 21:26:56.949537 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2v4g\" (UniqueName: \"kubernetes.io/projected/e3443cfc-e84b-4210-a03e-7596c3e620bf-kube-api-access-f2v4g\") pod \"e3443cfc-e84b-4210-a03e-7596c3e620bf\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " Nov 24 21:26:56 crc kubenswrapper[4812]: I1124 21:26:56.949631 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-ssh-key-openstack-cell1\") pod \"e3443cfc-e84b-4210-a03e-7596c3e620bf\" (UID: \"e3443cfc-e84b-4210-a03e-7596c3e620bf\") " Nov 24 21:26:56 crc kubenswrapper[4812]: I1124 21:26:56.955382 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3443cfc-e84b-4210-a03e-7596c3e620bf-kube-api-access-f2v4g" (OuterVolumeSpecName: "kube-api-access-f2v4g") pod "e3443cfc-e84b-4210-a03e-7596c3e620bf" (UID: "e3443cfc-e84b-4210-a03e-7596c3e620bf"). InnerVolumeSpecName "kube-api-access-f2v4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:26:56 crc kubenswrapper[4812]: I1124 21:26:56.993155 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e3443cfc-e84b-4210-a03e-7596c3e620bf" (UID: "e3443cfc-e84b-4210-a03e-7596c3e620bf"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:56 crc kubenswrapper[4812]: I1124 21:26:56.995415 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "e3443cfc-e84b-4210-a03e-7596c3e620bf" (UID: "e3443cfc-e84b-4210-a03e-7596c3e620bf"). InnerVolumeSpecName "ssh-key-openstack-cell1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.052696 4812 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.052849 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2v4g\" (UniqueName: \"kubernetes.io/projected/e3443cfc-e84b-4210-a03e-7596c3e620bf-kube-api-access-f2v4g\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.052946 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/e3443cfc-e84b-4210-a03e-7596c3e620bf-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.320251 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-n5jp4" event={"ID":"e3443cfc-e84b-4210-a03e-7596c3e620bf","Type":"ContainerDied","Data":"1936b47bec135357f9d206e52f76d9bee6a200ee2c5d3f28be2b6eaa1466ba43"} Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.320314 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1936b47bec135357f9d206e52f76d9bee6a200ee2c5d3f28be2b6eaa1466ba43" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.320384 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-n5jp4" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.414907 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-mvcx7"] Nov 24 21:26:57 crc kubenswrapper[4812]: E1124 21:26:57.415383 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3443cfc-e84b-4210-a03e-7596c3e620bf" containerName="ssh-known-hosts-openstack" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.415400 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3443cfc-e84b-4210-a03e-7596c3e620bf" containerName="ssh-known-hosts-openstack" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.415602 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3443cfc-e84b-4210-a03e-7596c3e620bf" containerName="ssh-known-hosts-openstack" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.416318 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.419356 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.419407 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.420022 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.420217 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.440600 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-mvcx7"] Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.565131 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkpm\" (UniqueName: \"kubernetes.io/projected/73077426-5c9e-4705-8840-6d65e962cf45-kube-api-access-6nkpm\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.565238 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-ssh-key\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.565590 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-inventory\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.667632 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nkpm\" (UniqueName: \"kubernetes.io/projected/73077426-5c9e-4705-8840-6d65e962cf45-kube-api-access-6nkpm\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.667697 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-ssh-key\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.667777 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-inventory\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.675778 4812 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-inventory\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.676647 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-ssh-key\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.701209 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nkpm\" (UniqueName: \"kubernetes.io/projected/73077426-5c9e-4705-8840-6d65e962cf45-kube-api-access-6nkpm\") pod \"run-os-openstack-openstack-cell1-mvcx7\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:57 crc kubenswrapper[4812]: I1124 21:26:57.742621 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:26:58 crc kubenswrapper[4812]: I1124 21:26:58.362290 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-mvcx7"] Nov 24 21:26:59 crc kubenswrapper[4812]: I1124 21:26:59.366642 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" event={"ID":"73077426-5c9e-4705-8840-6d65e962cf45","Type":"ContainerStarted","Data":"16973df2b6cf2904151f24361b3ee380d8230916184adb23a77fabad47515832"} Nov 24 21:27:00 crc kubenswrapper[4812]: I1124 21:27:00.381715 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" event={"ID":"73077426-5c9e-4705-8840-6d65e962cf45","Type":"ContainerStarted","Data":"cbe714849cbd5587c18584a90781be6047a691c70bb8d097660e8a5cc409e063"} Nov 24 21:27:00 crc kubenswrapper[4812]: I1124 21:27:00.410138 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" podStartSLOduration=2.617402696 podStartE2EDuration="3.410118693s" podCreationTimestamp="2025-11-24 21:26:57 +0000 UTC" firstStartedPulling="2025-11-24 21:26:58.376896327 +0000 UTC m=+7812.165848708" lastFinishedPulling="2025-11-24 21:26:59.169612294 +0000 UTC m=+7812.958564705" observedRunningTime="2025-11-24 21:27:00.401687734 +0000 UTC m=+7814.190640105" watchObservedRunningTime="2025-11-24 21:27:00.410118693 +0000 UTC m=+7814.199071064" Nov 24 21:27:07 crc kubenswrapper[4812]: I1124 21:27:07.463729 4812 generic.go:334] "Generic (PLEG): container finished" podID="73077426-5c9e-4705-8840-6d65e962cf45" containerID="cbe714849cbd5587c18584a90781be6047a691c70bb8d097660e8a5cc409e063" exitCode=0 Nov 24 21:27:07 crc kubenswrapper[4812]: I1124 21:27:07.463809 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" event={"ID":"73077426-5c9e-4705-8840-6d65e962cf45","Type":"ContainerDied","Data":"cbe714849cbd5587c18584a90781be6047a691c70bb8d097660e8a5cc409e063"} Nov 24 21:27:08 crc kubenswrapper[4812]: I1124 21:27:08.926655 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.020112 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-ssh-key\") pod \"73077426-5c9e-4705-8840-6d65e962cf45\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.020412 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nkpm\" (UniqueName: \"kubernetes.io/projected/73077426-5c9e-4705-8840-6d65e962cf45-kube-api-access-6nkpm\") pod \"73077426-5c9e-4705-8840-6d65e962cf45\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.020792 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-inventory\") pod \"73077426-5c9e-4705-8840-6d65e962cf45\" (UID: \"73077426-5c9e-4705-8840-6d65e962cf45\") " Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.025841 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73077426-5c9e-4705-8840-6d65e962cf45-kube-api-access-6nkpm" (OuterVolumeSpecName: "kube-api-access-6nkpm") pod "73077426-5c9e-4705-8840-6d65e962cf45" (UID: "73077426-5c9e-4705-8840-6d65e962cf45"). InnerVolumeSpecName "kube-api-access-6nkpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.058578 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-inventory" (OuterVolumeSpecName: "inventory") pod "73077426-5c9e-4705-8840-6d65e962cf45" (UID: "73077426-5c9e-4705-8840-6d65e962cf45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.074138 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "73077426-5c9e-4705-8840-6d65e962cf45" (UID: "73077426-5c9e-4705-8840-6d65e962cf45"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.123702 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nkpm\" (UniqueName: \"kubernetes.io/projected/73077426-5c9e-4705-8840-6d65e962cf45-kube-api-access-6nkpm\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.123731 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.123741 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/73077426-5c9e-4705-8840-6d65e962cf45-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.495274 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" event={"ID":"73077426-5c9e-4705-8840-6d65e962cf45","Type":"ContainerDied","Data":"16973df2b6cf2904151f24361b3ee380d8230916184adb23a77fabad47515832"} Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.495731 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16973df2b6cf2904151f24361b3ee380d8230916184adb23a77fabad47515832" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.495311 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-mvcx7" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.579908 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-v8879"] Nov 24 21:27:09 crc kubenswrapper[4812]: E1124 21:27:09.580690 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73077426-5c9e-4705-8840-6d65e962cf45" containerName="run-os-openstack-openstack-cell1" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.580717 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="73077426-5c9e-4705-8840-6d65e962cf45" containerName="run-os-openstack-openstack-cell1" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.581102 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="73077426-5c9e-4705-8840-6d65e962cf45" containerName="run-os-openstack-openstack-cell1" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.582059 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.589550 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.589711 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.590125 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.590223 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.611542 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-v8879"] Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.736870 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4279\" (UniqueName: \"kubernetes.io/projected/e3bff763-259a-4fae-b22e-c48e71a1c9a3-kube-api-access-z4279\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.737366 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.737557 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-inventory\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.839861 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4279\" (UniqueName: \"kubernetes.io/projected/e3bff763-259a-4fae-b22e-c48e71a1c9a3-kube-api-access-z4279\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.840204 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.840242 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-inventory\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.845215 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.845428 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-inventory\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.857562 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4279\" (UniqueName: \"kubernetes.io/projected/e3bff763-259a-4fae-b22e-c48e71a1c9a3-kube-api-access-z4279\") pod \"reboot-os-openstack-openstack-cell1-v8879\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:09 crc kubenswrapper[4812]: I1124 21:27:09.912482 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:10 crc kubenswrapper[4812]: I1124 21:27:10.442436 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-v8879"] Nov 24 21:27:10 crc kubenswrapper[4812]: I1124 21:27:10.506788 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" event={"ID":"e3bff763-259a-4fae-b22e-c48e71a1c9a3","Type":"ContainerStarted","Data":"16f3932af9e69b0d104c0bec66b18ad434dd7e9c3beecf7e1d9f03b1b3fbc9b0"} Nov 24 21:27:11 crc kubenswrapper[4812]: I1124 21:27:11.524923 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" event={"ID":"e3bff763-259a-4fae-b22e-c48e71a1c9a3","Type":"ContainerStarted","Data":"5337d8f149d96eb6cb8c19503555dc4b7709c2b8aa7f85b232a31b7dbb83f562"} Nov 24 21:27:12 crc kubenswrapper[4812]: I1124 21:27:12.553225 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" podStartSLOduration=2.832949584 podStartE2EDuration="3.553206488s" podCreationTimestamp="2025-11-24 21:27:09 +0000 UTC" firstStartedPulling="2025-11-24 21:27:10.454798186 +0000 UTC m=+7824.243750557" lastFinishedPulling="2025-11-24 21:27:11.17505509 +0000 UTC m=+7824.964007461" observedRunningTime="2025-11-24 21:27:12.552546229 +0000 UTC m=+7826.341498600" watchObservedRunningTime="2025-11-24 21:27:12.553206488 +0000 UTC m=+7826.342158859" Nov 24 21:27:27 crc kubenswrapper[4812]: I1124 21:27:27.709588 4812 generic.go:334] "Generic (PLEG): container finished" podID="e3bff763-259a-4fae-b22e-c48e71a1c9a3" containerID="5337d8f149d96eb6cb8c19503555dc4b7709c2b8aa7f85b232a31b7dbb83f562" exitCode=0 Nov 24 21:27:27 crc kubenswrapper[4812]: I1124 21:27:27.709698 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" event={"ID":"e3bff763-259a-4fae-b22e-c48e71a1c9a3","Type":"ContainerDied","Data":"5337d8f149d96eb6cb8c19503555dc4b7709c2b8aa7f85b232a31b7dbb83f562"} Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.259998 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.422538 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4279\" (UniqueName: \"kubernetes.io/projected/e3bff763-259a-4fae-b22e-c48e71a1c9a3-kube-api-access-z4279\") pod \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.422791 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-ssh-key\") pod \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.422869 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-inventory\") pod \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\" (UID: \"e3bff763-259a-4fae-b22e-c48e71a1c9a3\") " Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.429276 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3bff763-259a-4fae-b22e-c48e71a1c9a3-kube-api-access-z4279" (OuterVolumeSpecName: "kube-api-access-z4279") pod "e3bff763-259a-4fae-b22e-c48e71a1c9a3" (UID: "e3bff763-259a-4fae-b22e-c48e71a1c9a3"). InnerVolumeSpecName "kube-api-access-z4279". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.456883 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-inventory" (OuterVolumeSpecName: "inventory") pod "e3bff763-259a-4fae-b22e-c48e71a1c9a3" (UID: "e3bff763-259a-4fae-b22e-c48e71a1c9a3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.489802 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e3bff763-259a-4fae-b22e-c48e71a1c9a3" (UID: "e3bff763-259a-4fae-b22e-c48e71a1c9a3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.527299 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4279\" (UniqueName: \"kubernetes.io/projected/e3bff763-259a-4fae-b22e-c48e71a1c9a3-kube-api-access-z4279\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.527508 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.527597 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3bff763-259a-4fae-b22e-c48e71a1c9a3-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.737194 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" event={"ID":"e3bff763-259a-4fae-b22e-c48e71a1c9a3","Type":"ContainerDied","Data":"16f3932af9e69b0d104c0bec66b18ad434dd7e9c3beecf7e1d9f03b1b3fbc9b0"} Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.737242 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f3932af9e69b0d104c0bec66b18ad434dd7e9c3beecf7e1d9f03b1b3fbc9b0" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.737742 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-v8879" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.925772 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-9bgq8"] Nov 24 21:27:29 crc kubenswrapper[4812]: E1124 21:27:29.926466 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3bff763-259a-4fae-b22e-c48e71a1c9a3" containerName="reboot-os-openstack-openstack-cell1" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.926482 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3bff763-259a-4fae-b22e-c48e71a1c9a3" containerName="reboot-os-openstack-openstack-cell1" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.926665 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3bff763-259a-4fae-b22e-c48e71a1c9a3" containerName="reboot-os-openstack-openstack-cell1" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.927381 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.930210 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.930412 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.934173 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.934434 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.935492 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.935774 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.936552 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.938419 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Nov 24 21:27:29 crc kubenswrapper[4812]: I1124 21:27:29.957818 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-9bgq8"] Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.037721 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.037819 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.037844 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.037933 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " 
pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.037963 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038007 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038031 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-inventory\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038105 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038123 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038159 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq997\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-kube-api-access-qq997\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038183 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038233 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038264 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038317 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ssh-key\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.038371 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139674 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139746 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139783 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-inventory\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139865 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139888 4812 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139907 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq997\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-kube-api-access-qq997\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139938 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.139988 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.140038 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.140122 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ssh-key\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.140154 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.140240 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc 
kubenswrapper[4812]: I1124 21:27:30.140320 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.140367 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.140412 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.146000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.147594 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ssh-key\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.149004 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-inventory\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.149602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.152627 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 
21:27:30.152734 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.156901 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.157016 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.157559 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.158242 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.159109 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.159287 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.159902 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 
crc kubenswrapper[4812]: I1124 21:27:30.161853 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq997\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-kube-api-access-qq997\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.170561 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-9bgq8\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.297280 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:27:30 crc kubenswrapper[4812]: I1124 21:27:30.925743 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-9bgq8"] Nov 24 21:27:30 crc kubenswrapper[4812]: W1124 21:27:30.943542 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod189a60ec_9ff7_4e4c_bd6b_b24fde13c9a6.slice/crio-b035984eb8b93a02dbf599e13775b68cab9de8c6d482f1ba28603ee5d36ff223 WatchSource:0}: Error finding container b035984eb8b93a02dbf599e13775b68cab9de8c6d482f1ba28603ee5d36ff223: Status 404 returned error can't find the container with id b035984eb8b93a02dbf599e13775b68cab9de8c6d482f1ba28603ee5d36ff223 Nov 24 21:27:31 crc kubenswrapper[4812]: I1124 21:27:31.756449 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" event={"ID":"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6","Type":"ContainerStarted","Data":"e1b98021ecfa6290fba451b203d8f5d8b081cadd47fa7d1aa4b2fa31bf4b8dcc"} Nov 24 21:27:31 crc kubenswrapper[4812]: I1124 21:27:31.756945 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" event={"ID":"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6","Type":"ContainerStarted","Data":"b035984eb8b93a02dbf599e13775b68cab9de8c6d482f1ba28603ee5d36ff223"} Nov 24 21:27:31 crc kubenswrapper[4812]: I1124 21:27:31.789968 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" podStartSLOduration=2.35135571 podStartE2EDuration="2.789945131s" podCreationTimestamp="2025-11-24 21:27:29 +0000 UTC" firstStartedPulling="2025-11-24 21:27:30.945565419 +0000 UTC m=+7844.734517810" lastFinishedPulling="2025-11-24 21:27:31.38415485 +0000 UTC m=+7845.173107231" observedRunningTime="2025-11-24 21:27:31.779819244 +0000 UTC m=+7845.568771615" watchObservedRunningTime="2025-11-24 21:27:31.789945131 +0000 UTC m=+7845.578897522" Nov 24 21:28:10 crc kubenswrapper[4812]: I1124 21:28:10.168422 4812 generic.go:334] "Generic (PLEG): container finished" podID="189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" containerID="e1b98021ecfa6290fba451b203d8f5d8b081cadd47fa7d1aa4b2fa31bf4b8dcc" exitCode=0 Nov 24 21:28:10 crc kubenswrapper[4812]: I1124 21:28:10.168481 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" 
event={"ID":"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6","Type":"ContainerDied","Data":"e1b98021ecfa6290fba451b203d8f5d8b081cadd47fa7d1aa4b2fa31bf4b8dcc"} Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.652170 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.716749 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ssh-key\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.716852 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-inventory\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.716898 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-dhcp-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.716923 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-metadata-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.716996 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-ovn-default-certs-0\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717043 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq997\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-kube-api-access-qq997\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717070 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-telemetry-default-certs-0\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717113 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-libvirt-default-certs-0\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717151 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-libvirt-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717219 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-neutron-metadata-default-certs-0\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717245 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-bootstrap-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-nova-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717360 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ovn-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717421 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-telemetry-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.717502 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-sriov-combined-ca-bundle\") pod \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\" (UID: \"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6\") " Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.724423 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.729381 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.729412 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.729501 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.729846 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.730997 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.732453 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.733197 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.733711 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.733871 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.733885 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.735369 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-kube-api-access-qq997" (OuterVolumeSpecName: "kube-api-access-qq997") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "kube-api-access-qq997". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.737274 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.761850 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-inventory" (OuterVolumeSpecName: "inventory") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.785063 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" (UID: "189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.820997 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821050 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821065 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821078 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821090 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821106 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq997\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-kube-api-access-qq997\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821121 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821136 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821152 4812 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821164 4812 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821182 4812 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821196 4812 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821208 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821220 4812 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:11 crc kubenswrapper[4812]: I1124 21:28:11.821235 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.190850 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" event={"ID":"189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6","Type":"ContainerDied","Data":"b035984eb8b93a02dbf599e13775b68cab9de8c6d482f1ba28603ee5d36ff223"} Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.191462 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b035984eb8b93a02dbf599e13775b68cab9de8c6d482f1ba28603ee5d36ff223" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.190914 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-9bgq8" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.295981 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-qxwxw"] Nov 24 21:28:12 crc kubenswrapper[4812]: E1124 21:28:12.296435 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" containerName="install-certs-openstack-openstack-cell1" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.296453 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" containerName="install-certs-openstack-openstack-cell1" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.296869 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6" containerName="install-certs-openstack-openstack-cell1" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.297562 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.300120 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.300467 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.300210 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.300519 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.305667 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.316794 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-qxwxw"] Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.434781 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.434955 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-inventory\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.435016 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.435116 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ssh-key\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.435305 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68cn\" (UniqueName: \"kubernetes.io/projected/a7c8f911-0614-4e94-959f-8a0eabb6f1db-kube-api-access-g68cn\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.536833 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-inventory\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: 
\"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.536884 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.536939 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ssh-key\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.537028 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g68cn\" (UniqueName: \"kubernetes.io/projected/a7c8f911-0614-4e94-959f-8a0eabb6f1db-kube-api-access-g68cn\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.537131 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.538105 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.547863 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ssh-key\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.549350 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.554127 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-inventory\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.559297 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g68cn\" (UniqueName: 
\"kubernetes.io/projected/a7c8f911-0614-4e94-959f-8a0eabb6f1db-kube-api-access-g68cn\") pod \"ovn-openstack-openstack-cell1-qxwxw\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:12 crc kubenswrapper[4812]: I1124 21:28:12.615459 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:28:13 crc kubenswrapper[4812]: I1124 21:28:13.174121 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-qxwxw"] Nov 24 21:28:13 crc kubenswrapper[4812]: I1124 21:28:13.208140 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" event={"ID":"a7c8f911-0614-4e94-959f-8a0eabb6f1db","Type":"ContainerStarted","Data":"82f42d2f8d0bf1c1a868eb46df5394f511d4c054160359b2fe45fecb6b016ef7"} Nov 24 21:28:14 crc kubenswrapper[4812]: I1124 21:28:14.222289 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" event={"ID":"a7c8f911-0614-4e94-959f-8a0eabb6f1db","Type":"ContainerStarted","Data":"0509fe5a193302e8bc55e178f0b974d81395c9804bc1cd5f3c1be64bfc865ea6"} Nov 24 21:28:14 crc kubenswrapper[4812]: I1124 21:28:14.242126 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" podStartSLOduration=1.741556159 podStartE2EDuration="2.242102416s" podCreationTimestamp="2025-11-24 21:28:12 +0000 UTC" firstStartedPulling="2025-11-24 21:28:13.183253576 +0000 UTC m=+7886.972205977" lastFinishedPulling="2025-11-24 21:28:13.683799863 +0000 UTC m=+7887.472752234" observedRunningTime="2025-11-24 21:28:14.240066608 +0000 UTC m=+7888.029018999" watchObservedRunningTime="2025-11-24 21:28:14.242102416 +0000 UTC m=+7888.031054787" Nov 24 21:28:32 crc kubenswrapper[4812]: I1124 21:28:32.998387 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:28:33 crc kubenswrapper[4812]: I1124 21:28:32.998968 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:29:02 crc kubenswrapper[4812]: I1124 21:29:02.998009 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:29:02 crc kubenswrapper[4812]: I1124 21:29:02.998552 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:29:20 crc kubenswrapper[4812]: I1124 21:29:20.222731 4812 generic.go:334] "Generic (PLEG): container finished" podID="a7c8f911-0614-4e94-959f-8a0eabb6f1db" 
containerID="0509fe5a193302e8bc55e178f0b974d81395c9804bc1cd5f3c1be64bfc865ea6" exitCode=0 Nov 24 21:29:20 crc kubenswrapper[4812]: I1124 21:29:20.222859 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" event={"ID":"a7c8f911-0614-4e94-959f-8a0eabb6f1db","Type":"ContainerDied","Data":"0509fe5a193302e8bc55e178f0b974d81395c9804bc1cd5f3c1be64bfc865ea6"} Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.670171 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.806879 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovn-combined-ca-bundle\") pod \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.807060 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-inventory\") pod \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.807114 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ssh-key\") pod \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.807317 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g68cn\" (UniqueName: \"kubernetes.io/projected/a7c8f911-0614-4e94-959f-8a0eabb6f1db-kube-api-access-g68cn\") pod \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.807389 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovncontroller-config-0\") pod \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\" (UID: \"a7c8f911-0614-4e94-959f-8a0eabb6f1db\") " Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.814318 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a7c8f911-0614-4e94-959f-8a0eabb6f1db" (UID: "a7c8f911-0614-4e94-959f-8a0eabb6f1db"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.820952 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c8f911-0614-4e94-959f-8a0eabb6f1db-kube-api-access-g68cn" (OuterVolumeSpecName: "kube-api-access-g68cn") pod "a7c8f911-0614-4e94-959f-8a0eabb6f1db" (UID: "a7c8f911-0614-4e94-959f-8a0eabb6f1db"). InnerVolumeSpecName "kube-api-access-g68cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.836473 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a7c8f911-0614-4e94-959f-8a0eabb6f1db" (UID: "a7c8f911-0614-4e94-959f-8a0eabb6f1db"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.846237 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-inventory" (OuterVolumeSpecName: "inventory") pod "a7c8f911-0614-4e94-959f-8a0eabb6f1db" (UID: "a7c8f911-0614-4e94-959f-8a0eabb6f1db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.861239 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a7c8f911-0614-4e94-959f-8a0eabb6f1db" (UID: "a7c8f911-0614-4e94-959f-8a0eabb6f1db"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.909971 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g68cn\" (UniqueName: \"kubernetes.io/projected/a7c8f911-0614-4e94-959f-8a0eabb6f1db-kube-api-access-g68cn\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.910012 4812 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.910027 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.910039 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:21 crc kubenswrapper[4812]: I1124 21:29:21.910050 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a7c8f911-0614-4e94-959f-8a0eabb6f1db-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.241196 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" event={"ID":"a7c8f911-0614-4e94-959f-8a0eabb6f1db","Type":"ContainerDied","Data":"82f42d2f8d0bf1c1a868eb46df5394f511d4c054160359b2fe45fecb6b016ef7"} Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.241546 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82f42d2f8d0bf1c1a868eb46df5394f511d4c054160359b2fe45fecb6b016ef7" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.241264 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-qxwxw" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.349432 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8jdjj"] Nov 24 21:29:22 crc kubenswrapper[4812]: E1124 21:29:22.360526 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c8f911-0614-4e94-959f-8a0eabb6f1db" containerName="ovn-openstack-openstack-cell1" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.360550 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c8f911-0614-4e94-959f-8a0eabb6f1db" containerName="ovn-openstack-openstack-cell1" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.360806 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c8f911-0614-4e94-959f-8a0eabb6f1db" containerName="ovn-openstack-openstack-cell1" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.361520 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.362694 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8jdjj"] Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.364674 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.364862 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.365052 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.365385 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.370175 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.370346 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.423211 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.423315 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbxbk\" (UniqueName: \"kubernetes.io/projected/797f1de2-68d2-44cb-88d8-37f0c13ffed8-kube-api-access-wbxbk\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.423429 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.423508 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.423546 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.423599 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.525304 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.525446 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbxbk\" (UniqueName: \"kubernetes.io/projected/797f1de2-68d2-44cb-88d8-37f0c13ffed8-kube-api-access-wbxbk\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.525590 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.525711 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.525782 4812 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.525854 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.530455 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.530500 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.531199 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.531561 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.534383 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.545849 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbxbk\" (UniqueName: \"kubernetes.io/projected/797f1de2-68d2-44cb-88d8-37f0c13ffed8-kube-api-access-wbxbk\") pod \"neutron-metadata-openstack-openstack-cell1-8jdjj\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:22 crc kubenswrapper[4812]: I1124 21:29:22.695977 4812 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:29:23 crc kubenswrapper[4812]: I1124 21:29:23.298608 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-8jdjj"] Nov 24 21:29:24 crc kubenswrapper[4812]: I1124 21:29:24.268356 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" event={"ID":"797f1de2-68d2-44cb-88d8-37f0c13ffed8","Type":"ContainerStarted","Data":"3bfee98d80075ba3d579e45e48ac1b2249ecdb87e414068ec6770cb29332689f"} Nov 24 21:29:26 crc kubenswrapper[4812]: I1124 21:29:26.289111 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" event={"ID":"797f1de2-68d2-44cb-88d8-37f0c13ffed8","Type":"ContainerStarted","Data":"6c6b878fa6f5cdd567f54c5a9ee4ad953b99596bc81ec2141f22232de99010cc"} Nov 24 21:29:26 crc kubenswrapper[4812]: I1124 21:29:26.324305 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" podStartSLOduration=2.132148789 podStartE2EDuration="4.324282447s" podCreationTimestamp="2025-11-24 21:29:22 +0000 UTC" firstStartedPulling="2025-11-24 21:29:23.295577669 +0000 UTC m=+7957.084530040" lastFinishedPulling="2025-11-24 21:29:25.487711327 +0000 UTC m=+7959.276663698" observedRunningTime="2025-11-24 21:29:26.309156488 +0000 UTC m=+7960.098108919" watchObservedRunningTime="2025-11-24 21:29:26.324282447 +0000 UTC m=+7960.113234858" Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:32.998815 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:33.000577 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:33.000636 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:33.001436 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f12dd0b6651f8011b7bf36f9bcf3ca426e59a40997459cec8ec481a2878ef075"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:33.001495 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://f12dd0b6651f8011b7bf36f9bcf3ca426e59a40997459cec8ec481a2878ef075" gracePeriod=600 Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:33.362433 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" 
containerID="f12dd0b6651f8011b7bf36f9bcf3ca426e59a40997459cec8ec481a2878ef075" exitCode=0 Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:33.362497 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"f12dd0b6651f8011b7bf36f9bcf3ca426e59a40997459cec8ec481a2878ef075"} Nov 24 21:29:33 crc kubenswrapper[4812]: I1124 21:29:33.362548 4812 scope.go:117] "RemoveContainer" containerID="43a38cfe382c54eb8389f267c8c642929bccaf512333e46cd173f01410811586" Nov 24 21:29:34 crc kubenswrapper[4812]: I1124 21:29:34.374048 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930"} Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.174319 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx"] Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.177307 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.182153 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.182372 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.204487 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx"] Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.269185 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe00ac0-08df-4b9f-81a2-eef86e5b220d-secret-volume\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.269566 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drnf9\" (UniqueName: \"kubernetes.io/projected/afe00ac0-08df-4b9f-81a2-eef86e5b220d-kube-api-access-drnf9\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.269868 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe00ac0-08df-4b9f-81a2-eef86e5b220d-config-volume\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.371715 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drnf9\" (UniqueName: \"kubernetes.io/projected/afe00ac0-08df-4b9f-81a2-eef86e5b220d-kube-api-access-drnf9\") pod 
\"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.371850 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe00ac0-08df-4b9f-81a2-eef86e5b220d-config-volume\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.371991 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe00ac0-08df-4b9f-81a2-eef86e5b220d-secret-volume\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.373991 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe00ac0-08df-4b9f-81a2-eef86e5b220d-config-volume\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.379681 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe00ac0-08df-4b9f-81a2-eef86e5b220d-secret-volume\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.393679 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drnf9\" (UniqueName: \"kubernetes.io/projected/afe00ac0-08df-4b9f-81a2-eef86e5b220d-kube-api-access-drnf9\") pod \"collect-profiles-29400330-4lmfx\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:00 crc kubenswrapper[4812]: I1124 21:30:00.507295 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:01 crc kubenswrapper[4812]: I1124 21:30:01.023951 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx"] Nov 24 21:30:01 crc kubenswrapper[4812]: W1124 21:30:01.039111 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafe00ac0_08df_4b9f_81a2_eef86e5b220d.slice/crio-4615c321b1b0ef72bb5ed694fa67fb5288faf60a00ecd541246fcc882366821b WatchSource:0}: Error finding container 4615c321b1b0ef72bb5ed694fa67fb5288faf60a00ecd541246fcc882366821b: Status 404 returned error can't find the container with id 4615c321b1b0ef72bb5ed694fa67fb5288faf60a00ecd541246fcc882366821b Nov 24 21:30:01 crc kubenswrapper[4812]: I1124 21:30:01.723054 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" event={"ID":"afe00ac0-08df-4b9f-81a2-eef86e5b220d","Type":"ContainerStarted","Data":"03f9b22fa9c7d12e1e31c5e0e0ea97589f815326e3169dd329042d5e5c6fb5f3"} Nov 24 21:30:01 crc kubenswrapper[4812]: I1124 21:30:01.723393 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" event={"ID":"afe00ac0-08df-4b9f-81a2-eef86e5b220d","Type":"ContainerStarted","Data":"4615c321b1b0ef72bb5ed694fa67fb5288faf60a00ecd541246fcc882366821b"} Nov 24 21:30:01 crc kubenswrapper[4812]: I1124 21:30:01.751910 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" podStartSLOduration=1.751892132 podStartE2EDuration="1.751892132s" podCreationTimestamp="2025-11-24 21:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:30:01.742833265 +0000 UTC m=+7995.531785636" watchObservedRunningTime="2025-11-24 21:30:01.751892132 +0000 UTC m=+7995.540844513" Nov 24 21:30:02 crc kubenswrapper[4812]: I1124 21:30:02.734682 4812 generic.go:334] "Generic (PLEG): container finished" podID="afe00ac0-08df-4b9f-81a2-eef86e5b220d" containerID="03f9b22fa9c7d12e1e31c5e0e0ea97589f815326e3169dd329042d5e5c6fb5f3" exitCode=0 Nov 24 21:30:02 crc kubenswrapper[4812]: I1124 21:30:02.734751 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" event={"ID":"afe00ac0-08df-4b9f-81a2-eef86e5b220d","Type":"ContainerDied","Data":"03f9b22fa9c7d12e1e31c5e0e0ea97589f815326e3169dd329042d5e5c6fb5f3"} Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.232145 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.312017 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe00ac0-08df-4b9f-81a2-eef86e5b220d-config-volume\") pod \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.312245 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drnf9\" (UniqueName: \"kubernetes.io/projected/afe00ac0-08df-4b9f-81a2-eef86e5b220d-kube-api-access-drnf9\") pod \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.312313 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe00ac0-08df-4b9f-81a2-eef86e5b220d-secret-volume\") pod \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\" (UID: \"afe00ac0-08df-4b9f-81a2-eef86e5b220d\") " Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.312884 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe00ac0-08df-4b9f-81a2-eef86e5b220d-config-volume" (OuterVolumeSpecName: "config-volume") pod "afe00ac0-08df-4b9f-81a2-eef86e5b220d" (UID: "afe00ac0-08df-4b9f-81a2-eef86e5b220d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.316909 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe00ac0-08df-4b9f-81a2-eef86e5b220d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.320179 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe00ac0-08df-4b9f-81a2-eef86e5b220d-kube-api-access-drnf9" (OuterVolumeSpecName: "kube-api-access-drnf9") pod "afe00ac0-08df-4b9f-81a2-eef86e5b220d" (UID: "afe00ac0-08df-4b9f-81a2-eef86e5b220d"). InnerVolumeSpecName "kube-api-access-drnf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.326438 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe00ac0-08df-4b9f-81a2-eef86e5b220d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "afe00ac0-08df-4b9f-81a2-eef86e5b220d" (UID: "afe00ac0-08df-4b9f-81a2-eef86e5b220d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.419425 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drnf9\" (UniqueName: \"kubernetes.io/projected/afe00ac0-08df-4b9f-81a2-eef86e5b220d-kube-api-access-drnf9\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.419473 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/afe00ac0-08df-4b9f-81a2-eef86e5b220d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.759152 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" event={"ID":"afe00ac0-08df-4b9f-81a2-eef86e5b220d","Type":"ContainerDied","Data":"4615c321b1b0ef72bb5ed694fa67fb5288faf60a00ecd541246fcc882366821b"} Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.759193 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4615c321b1b0ef72bb5ed694fa67fb5288faf60a00ecd541246fcc882366821b" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.759222 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-4lmfx" Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.848624 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn"] Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.866134 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400285-t48mn"] Nov 24 21:30:04 crc kubenswrapper[4812]: I1124 21:30:04.980135 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f58e63a-e7e2-498a-af27-92816aa53ba1" path="/var/lib/kubelet/pods/9f58e63a-e7e2-498a-af27-92816aa53ba1/volumes" Nov 24 21:30:18 crc kubenswrapper[4812]: I1124 21:30:18.922110 4812 generic.go:334] "Generic (PLEG): container finished" podID="797f1de2-68d2-44cb-88d8-37f0c13ffed8" containerID="6c6b878fa6f5cdd567f54c5a9ee4ad953b99596bc81ec2141f22232de99010cc" exitCode=0 Nov 24 21:30:18 crc kubenswrapper[4812]: I1124 21:30:18.922177 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" event={"ID":"797f1de2-68d2-44cb-88d8-37f0c13ffed8","Type":"ContainerDied","Data":"6c6b878fa6f5cdd567f54c5a9ee4ad953b99596bc81ec2141f22232de99010cc"} Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.482547 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.611805 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-metadata-combined-ca-bundle\") pod \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.612283 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-ssh-key\") pod \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.612304 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.612325 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbxbk\" (UniqueName: \"kubernetes.io/projected/797f1de2-68d2-44cb-88d8-37f0c13ffed8-kube-api-access-wbxbk\") pod \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.612380 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-nova-metadata-neutron-config-0\") pod \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.612440 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-inventory\") pod \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\" (UID: \"797f1de2-68d2-44cb-88d8-37f0c13ffed8\") " Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.617588 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "797f1de2-68d2-44cb-88d8-37f0c13ffed8" (UID: "797f1de2-68d2-44cb-88d8-37f0c13ffed8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.620623 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797f1de2-68d2-44cb-88d8-37f0c13ffed8-kube-api-access-wbxbk" (OuterVolumeSpecName: "kube-api-access-wbxbk") pod "797f1de2-68d2-44cb-88d8-37f0c13ffed8" (UID: "797f1de2-68d2-44cb-88d8-37f0c13ffed8"). InnerVolumeSpecName "kube-api-access-wbxbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.646857 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-inventory" (OuterVolumeSpecName: "inventory") pod "797f1de2-68d2-44cb-88d8-37f0c13ffed8" (UID: "797f1de2-68d2-44cb-88d8-37f0c13ffed8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.647826 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "797f1de2-68d2-44cb-88d8-37f0c13ffed8" (UID: "797f1de2-68d2-44cb-88d8-37f0c13ffed8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.650571 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "797f1de2-68d2-44cb-88d8-37f0c13ffed8" (UID: "797f1de2-68d2-44cb-88d8-37f0c13ffed8"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.658570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "797f1de2-68d2-44cb-88d8-37f0c13ffed8" (UID: "797f1de2-68d2-44cb-88d8-37f0c13ffed8"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.715420 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.715454 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.715469 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.715483 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbxbk\" (UniqueName: \"kubernetes.io/projected/797f1de2-68d2-44cb-88d8-37f0c13ffed8-kube-api-access-wbxbk\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.715497 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.715511 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797f1de2-68d2-44cb-88d8-37f0c13ffed8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.952057 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" event={"ID":"797f1de2-68d2-44cb-88d8-37f0c13ffed8","Type":"ContainerDied","Data":"3bfee98d80075ba3d579e45e48ac1b2249ecdb87e414068ec6770cb29332689f"} Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.952111 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bfee98d80075ba3d579e45e48ac1b2249ecdb87e414068ec6770cb29332689f" Nov 24 21:30:20 crc kubenswrapper[4812]: I1124 21:30:20.952186 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-8jdjj" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.085355 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-j29dm"] Nov 24 21:30:21 crc kubenswrapper[4812]: E1124 21:30:21.086050 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe00ac0-08df-4b9f-81a2-eef86e5b220d" containerName="collect-profiles" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.086066 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe00ac0-08df-4b9f-81a2-eef86e5b220d" containerName="collect-profiles" Nov 24 21:30:21 crc kubenswrapper[4812]: E1124 21:30:21.086089 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797f1de2-68d2-44cb-88d8-37f0c13ffed8" containerName="neutron-metadata-openstack-openstack-cell1" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.086096 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="797f1de2-68d2-44cb-88d8-37f0c13ffed8" containerName="neutron-metadata-openstack-openstack-cell1" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.086294 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="797f1de2-68d2-44cb-88d8-37f0c13ffed8" containerName="neutron-metadata-openstack-openstack-cell1" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.086322 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe00ac0-08df-4b9f-81a2-eef86e5b220d" containerName="collect-profiles" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.087070 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.092658 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.094361 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.094716 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.094950 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.095130 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.098135 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-j29dm"] Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.230742 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-ssh-key\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.230827 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-inventory\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " 
pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.230852 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcpfp\" (UniqueName: \"kubernetes.io/projected/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-kube-api-access-gcpfp\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.230899 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.230923 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.332514 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-ssh-key\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.332846 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-inventory\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.332993 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcpfp\" (UniqueName: \"kubernetes.io/projected/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-kube-api-access-gcpfp\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.333141 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.333245 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.338277 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-ssh-key\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.339015 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.339137 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.342893 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-inventory\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.356155 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcpfp\" (UniqueName: \"kubernetes.io/projected/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-kube-api-access-gcpfp\") pod \"libvirt-openstack-openstack-cell1-j29dm\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.410279 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.967908 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-j29dm"] Nov 24 21:30:21 crc kubenswrapper[4812]: I1124 21:30:21.983674 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:30:22 crc kubenswrapper[4812]: I1124 21:30:22.977186 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" event={"ID":"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028","Type":"ContainerStarted","Data":"872ea12dda9a596bd71308a7ec4af0e778a412adb3eeee469859d35c2d51835c"} Nov 24 21:30:22 crc kubenswrapper[4812]: I1124 21:30:22.977805 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" event={"ID":"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028","Type":"ContainerStarted","Data":"b544591e953beef17f174f7390d50fa2bc7232468fc91a610459f81e19950880"} Nov 24 21:30:23 crc kubenswrapper[4812]: I1124 21:30:23.000474 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" podStartSLOduration=1.423924453 podStartE2EDuration="2.000450303s" podCreationTimestamp="2025-11-24 21:30:21 +0000 UTC" firstStartedPulling="2025-11-24 21:30:21.983235633 +0000 UTC m=+8015.772188014" lastFinishedPulling="2025-11-24 21:30:22.559761483 +0000 UTC m=+8016.348713864" observedRunningTime="2025-11-24 21:30:22.987723373 +0000 UTC m=+8016.776675794" watchObservedRunningTime="2025-11-24 21:30:23.000450303 +0000 UTC m=+8016.789402694" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.068236 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zb66w"] Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.071417 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.082841 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zb66w"] Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.239323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-utilities\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.239689 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nj8v\" (UniqueName: \"kubernetes.io/projected/da7a2d41-1220-44c5-936a-401d88c113a6-kube-api-access-7nj8v\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.239784 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-catalog-content\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.341880 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-utilities\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.342081 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nj8v\" (UniqueName: \"kubernetes.io/projected/da7a2d41-1220-44c5-936a-401d88c113a6-kube-api-access-7nj8v\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.342148 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-catalog-content\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.342530 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-utilities\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.342598 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-catalog-content\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.370158 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7nj8v\" (UniqueName: \"kubernetes.io/projected/da7a2d41-1220-44c5-936a-401d88c113a6-kube-api-access-7nj8v\") pod \"certified-operators-zb66w\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.392731 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:48 crc kubenswrapper[4812]: I1124 21:30:48.989792 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zb66w"] Nov 24 21:30:49 crc kubenswrapper[4812]: I1124 21:30:49.284081 4812 generic.go:334] "Generic (PLEG): container finished" podID="da7a2d41-1220-44c5-936a-401d88c113a6" containerID="2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef" exitCode=0 Nov 24 21:30:49 crc kubenswrapper[4812]: I1124 21:30:49.284280 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zb66w" event={"ID":"da7a2d41-1220-44c5-936a-401d88c113a6","Type":"ContainerDied","Data":"2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef"} Nov 24 21:30:49 crc kubenswrapper[4812]: I1124 21:30:49.284526 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zb66w" event={"ID":"da7a2d41-1220-44c5-936a-401d88c113a6","Type":"ContainerStarted","Data":"fcbe1226980a714b0e99e9169e36b0b360034b30d5d8b2cb276deb2aac329eb1"} Nov 24 21:30:50 crc kubenswrapper[4812]: I1124 21:30:50.301505 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zb66w" event={"ID":"da7a2d41-1220-44c5-936a-401d88c113a6","Type":"ContainerStarted","Data":"1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0"} Nov 24 21:30:52 crc kubenswrapper[4812]: I1124 21:30:52.327442 4812 generic.go:334] "Generic (PLEG): container finished" podID="da7a2d41-1220-44c5-936a-401d88c113a6" containerID="1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0" exitCode=0 Nov 24 21:30:52 crc kubenswrapper[4812]: I1124 21:30:52.327545 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zb66w" event={"ID":"da7a2d41-1220-44c5-936a-401d88c113a6","Type":"ContainerDied","Data":"1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0"} Nov 24 21:30:53 crc kubenswrapper[4812]: I1124 21:30:53.338783 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zb66w" event={"ID":"da7a2d41-1220-44c5-936a-401d88c113a6","Type":"ContainerStarted","Data":"66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5"} Nov 24 21:30:53 crc kubenswrapper[4812]: I1124 21:30:53.359839 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zb66w" podStartSLOduration=1.8535653349999999 podStartE2EDuration="5.359821418s" podCreationTimestamp="2025-11-24 21:30:48 +0000 UTC" firstStartedPulling="2025-11-24 21:30:49.285961398 +0000 UTC m=+8043.074913810" lastFinishedPulling="2025-11-24 21:30:52.792217522 +0000 UTC m=+8046.581169893" observedRunningTime="2025-11-24 21:30:53.357498672 +0000 UTC m=+8047.146451043" watchObservedRunningTime="2025-11-24 21:30:53.359821418 +0000 UTC m=+8047.148773779" Nov 24 21:30:58 crc kubenswrapper[4812]: I1124 21:30:58.393767 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:58 crc kubenswrapper[4812]: I1124 21:30:58.394427 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:58 crc kubenswrapper[4812]: I1124 21:30:58.474231 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:59 crc kubenswrapper[4812]: I1124 21:30:59.512997 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:30:59 crc kubenswrapper[4812]: I1124 21:30:59.589019 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zb66w"] Nov 24 21:31:01 crc kubenswrapper[4812]: I1124 21:31:01.446431 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zb66w" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="registry-server" containerID="cri-o://66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5" gracePeriod=2 Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.007484 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.082774 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-utilities\") pod \"da7a2d41-1220-44c5-936a-401d88c113a6\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.082900 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nj8v\" (UniqueName: \"kubernetes.io/projected/da7a2d41-1220-44c5-936a-401d88c113a6-kube-api-access-7nj8v\") pod \"da7a2d41-1220-44c5-936a-401d88c113a6\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.082978 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-catalog-content\") pod \"da7a2d41-1220-44c5-936a-401d88c113a6\" (UID: \"da7a2d41-1220-44c5-936a-401d88c113a6\") " Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.084085 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-utilities" (OuterVolumeSpecName: "utilities") pod "da7a2d41-1220-44c5-936a-401d88c113a6" (UID: "da7a2d41-1220-44c5-936a-401d88c113a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.085057 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.090937 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7a2d41-1220-44c5-936a-401d88c113a6-kube-api-access-7nj8v" (OuterVolumeSpecName: "kube-api-access-7nj8v") pod "da7a2d41-1220-44c5-936a-401d88c113a6" (UID: "da7a2d41-1220-44c5-936a-401d88c113a6"). 
InnerVolumeSpecName "kube-api-access-7nj8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.135186 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da7a2d41-1220-44c5-936a-401d88c113a6" (UID: "da7a2d41-1220-44c5-936a-401d88c113a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.187154 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nj8v\" (UniqueName: \"kubernetes.io/projected/da7a2d41-1220-44c5-936a-401d88c113a6-kube-api-access-7nj8v\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.187200 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da7a2d41-1220-44c5-936a-401d88c113a6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.459524 4812 generic.go:334] "Generic (PLEG): container finished" podID="da7a2d41-1220-44c5-936a-401d88c113a6" containerID="66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5" exitCode=0 Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.459593 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zb66w" event={"ID":"da7a2d41-1220-44c5-936a-401d88c113a6","Type":"ContainerDied","Data":"66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5"} Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.459614 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zb66w" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.459644 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zb66w" event={"ID":"da7a2d41-1220-44c5-936a-401d88c113a6","Type":"ContainerDied","Data":"fcbe1226980a714b0e99e9169e36b0b360034b30d5d8b2cb276deb2aac329eb1"} Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.459674 4812 scope.go:117] "RemoveContainer" containerID="66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.483940 4812 scope.go:117] "RemoveContainer" containerID="1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.505277 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zb66w"] Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.515349 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zb66w"] Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.534187 4812 scope.go:117] "RemoveContainer" containerID="2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.591317 4812 scope.go:117] "RemoveContainer" containerID="66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5" Nov 24 21:31:02 crc kubenswrapper[4812]: E1124 21:31:02.591656 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5\": container with ID starting with 66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5 not found: ID does not exist" containerID="66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.591696 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5"} err="failed to get container status \"66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5\": rpc error: code = NotFound desc = could not find container \"66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5\": container with ID starting with 66ce56d789412b779dff757675a67df7701feabd19e6f0b03e5441d1b01855b5 not found: ID does not exist" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.591716 4812 scope.go:117] "RemoveContainer" containerID="1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0" Nov 24 21:31:02 crc kubenswrapper[4812]: E1124 21:31:02.591977 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0\": container with ID starting with 1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0 not found: ID does not exist" containerID="1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.592000 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0"} err="failed to get container status \"1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0\": rpc error: code = NotFound desc = could not find 
container \"1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0\": container with ID starting with 1566dc987c17db9c8bf692daf85ac449986a7cd73473be75a348b00e2275b2a0 not found: ID does not exist" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.592030 4812 scope.go:117] "RemoveContainer" containerID="2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef" Nov 24 21:31:02 crc kubenswrapper[4812]: E1124 21:31:02.592500 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef\": container with ID starting with 2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef not found: ID does not exist" containerID="2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.592548 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef"} err="failed to get container status \"2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef\": rpc error: code = NotFound desc = could not find container \"2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef\": container with ID starting with 2f9f5f57a610d97966a90cd027e5d392eff464776cbadbf17aae5c09a56180ef not found: ID does not exist" Nov 24 21:31:02 crc kubenswrapper[4812]: I1124 21:31:02.987767 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" path="/var/lib/kubelet/pods/da7a2d41-1220-44c5-936a-401d88c113a6/volumes" Nov 24 21:31:04 crc kubenswrapper[4812]: I1124 21:31:04.161838 4812 scope.go:117] "RemoveContainer" containerID="23a20c6ce136498b242d6000830dddc00eef79c8771a3144488a735792435aae" Nov 24 21:32:02 crc kubenswrapper[4812]: I1124 21:32:02.998018 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:32:02 crc kubenswrapper[4812]: I1124 21:32:02.998608 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:32:32 crc kubenswrapper[4812]: I1124 21:32:32.998810 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:32:33 crc kubenswrapper[4812]: I1124 21:32:33.000069 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:33:02 crc kubenswrapper[4812]: I1124 21:33:02.998605 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:02.999148 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:02.999202 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:03.000077 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:03.000135 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" gracePeriod=600 Nov 24 21:33:03 crc kubenswrapper[4812]: E1124 21:33:03.132080 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:03.902924 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" exitCode=0 Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:03.902948 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930"} Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:03.903474 4812 scope.go:117] "RemoveContainer" containerID="f12dd0b6651f8011b7bf36f9bcf3ca426e59a40997459cec8ec481a2878ef075" Nov 24 21:33:03 crc kubenswrapper[4812]: I1124 21:33:03.905422 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:33:03 crc kubenswrapper[4812]: E1124 21:33:03.905910 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" 
Nov 24 21:33:15 crc kubenswrapper[4812]: I1124 21:33:15.965791 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:33:15 crc kubenswrapper[4812]: E1124 21:33:15.966822 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:33:29 crc kubenswrapper[4812]: I1124 21:33:29.966239 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:33:29 crc kubenswrapper[4812]: E1124 21:33:29.968001 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:33:42 crc kubenswrapper[4812]: I1124 21:33:42.968322 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:33:42 crc kubenswrapper[4812]: E1124 21:33:42.969426 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:33:57 crc kubenswrapper[4812]: I1124 21:33:57.966393 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:33:57 crc kubenswrapper[4812]: E1124 21:33:57.967102 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:34:11 crc kubenswrapper[4812]: I1124 21:34:11.965823 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:34:11 crc kubenswrapper[4812]: E1124 21:34:11.966660 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:34:24 crc kubenswrapper[4812]: I1124 21:34:24.966281 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:34:24 
crc kubenswrapper[4812]: E1124 21:34:24.967567 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:34:37 crc kubenswrapper[4812]: I1124 21:34:37.966578 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:34:37 crc kubenswrapper[4812]: E1124 21:34:37.967365 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:34:52 crc kubenswrapper[4812]: I1124 21:34:52.966640 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:34:52 crc kubenswrapper[4812]: E1124 21:34:52.967568 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:35:01 crc kubenswrapper[4812]: I1124 21:35:01.243369 4812 generic.go:334] "Generic (PLEG): container finished" podID="b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" containerID="872ea12dda9a596bd71308a7ec4af0e778a412adb3eeee469859d35c2d51835c" exitCode=0 Nov 24 21:35:01 crc kubenswrapper[4812]: I1124 21:35:01.243448 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" event={"ID":"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028","Type":"ContainerDied","Data":"872ea12dda9a596bd71308a7ec4af0e778a412adb3eeee469859d35c2d51835c"} Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.738288 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.930603 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-ssh-key\") pod \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.930860 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-secret-0\") pod \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.930951 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcpfp\" (UniqueName: \"kubernetes.io/projected/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-kube-api-access-gcpfp\") pod \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.931007 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-combined-ca-bundle\") pod \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.931094 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-inventory\") pod \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\" (UID: \"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028\") " Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.935842 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" (UID: "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.948606 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-kube-api-access-gcpfp" (OuterVolumeSpecName: "kube-api-access-gcpfp") pod "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" (UID: "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028"). InnerVolumeSpecName "kube-api-access-gcpfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.962221 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" (UID: "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.967013 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" (UID: "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:35:02 crc kubenswrapper[4812]: I1124 21:35:02.974286 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-inventory" (OuterVolumeSpecName: "inventory") pod "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" (UID: "b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.034305 4812 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.034368 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcpfp\" (UniqueName: \"kubernetes.io/projected/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-kube-api-access-gcpfp\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.034387 4812 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.034400 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.034418 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.268630 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" event={"ID":"b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028","Type":"ContainerDied","Data":"b544591e953beef17f174f7390d50fa2bc7232468fc91a610459f81e19950880"} Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.268686 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b544591e953beef17f174f7390d50fa2bc7232468fc91a610459f81e19950880" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.269060 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-j29dm" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.391855 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-rjxhn"] Nov 24 21:35:03 crc kubenswrapper[4812]: E1124 21:35:03.392496 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" containerName="libvirt-openstack-openstack-cell1" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.392522 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" containerName="libvirt-openstack-openstack-cell1" Nov 24 21:35:03 crc kubenswrapper[4812]: E1124 21:35:03.392539 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="registry-server" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.392547 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="registry-server" Nov 24 21:35:03 crc kubenswrapper[4812]: E1124 21:35:03.392594 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="extract-utilities" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.392602 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="extract-utilities" Nov 24 21:35:03 crc kubenswrapper[4812]: E1124 21:35:03.392618 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="extract-content" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.392625 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="extract-content" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.392868 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7a2d41-1220-44c5-936a-401d88c113a6" containerName="registry-server" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.392909 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028" containerName="libvirt-openstack-openstack-cell1" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.393877 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.398743 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.398867 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.399013 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.399114 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.399193 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.399281 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.401453 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-rjxhn"] Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.403693 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.542725 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.542786 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-inventory\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.542877 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.542999 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnlcg\" (UniqueName: \"kubernetes.io/projected/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-kube-api-access-cnlcg\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.543020 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-combined-ca-bundle\") 
pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.543042 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.543067 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.543119 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.543160 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.644498 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.644572 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.644603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-inventory\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.644701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-1\") pod 
\"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.644816 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnlcg\" (UniqueName: \"kubernetes.io/projected/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-kube-api-access-cnlcg\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.644836 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.645205 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.645243 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.645352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.645738 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.649926 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.650563 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " 
pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.651028 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-inventory\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.653435 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.657810 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.657858 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.658148 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.662212 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnlcg\" (UniqueName: \"kubernetes.io/projected/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-kube-api-access-cnlcg\") pod \"nova-cell1-openstack-openstack-cell1-rjxhn\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:03 crc kubenswrapper[4812]: I1124 21:35:03.729428 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:35:04 crc kubenswrapper[4812]: I1124 21:35:04.235424 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-rjxhn"] Nov 24 21:35:04 crc kubenswrapper[4812]: I1124 21:35:04.282213 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" event={"ID":"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199","Type":"ContainerStarted","Data":"c684ad8434cb74120b55d68f70e26a7f34bba917a56089630562f5dd98351e3b"} Nov 24 21:35:05 crc kubenswrapper[4812]: I1124 21:35:05.295452 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" event={"ID":"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199","Type":"ContainerStarted","Data":"592660d84aec6f22e303ffcb821d134592665cf0e49116df3736b114fc257fa4"} Nov 24 21:35:05 crc kubenswrapper[4812]: I1124 21:35:05.326839 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" podStartSLOduration=1.894185438 podStartE2EDuration="2.32680722s" podCreationTimestamp="2025-11-24 21:35:03 +0000 UTC" firstStartedPulling="2025-11-24 21:35:04.243035704 +0000 UTC m=+8298.031988075" lastFinishedPulling="2025-11-24 21:35:04.675657486 +0000 UTC m=+8298.464609857" observedRunningTime="2025-11-24 21:35:05.319857313 +0000 UTC m=+8299.108809714" watchObservedRunningTime="2025-11-24 21:35:05.32680722 +0000 UTC m=+8299.115759631" Nov 24 21:35:05 crc kubenswrapper[4812]: I1124 21:35:05.965582 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:35:05 crc kubenswrapper[4812]: E1124 21:35:05.966433 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.190762 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hb6k"] Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.194672 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.252606 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7b9h\" (UniqueName: \"kubernetes.io/projected/73e42f55-63cd-4463-a945-f8684e8947ab-kube-api-access-j7b9h\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.252779 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-utilities\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.252812 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-catalog-content\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.266931 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hb6k"] Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.354762 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-utilities\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.354819 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-catalog-content\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.354919 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7b9h\" (UniqueName: \"kubernetes.io/projected/73e42f55-63cd-4463-a945-f8684e8947ab-kube-api-access-j7b9h\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.355816 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-utilities\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.356108 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-catalog-content\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.375816 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j7b9h\" (UniqueName: \"kubernetes.io/projected/73e42f55-63cd-4463-a945-f8684e8947ab-kube-api-access-j7b9h\") pod \"community-operators-9hb6k\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:16 crc kubenswrapper[4812]: I1124 21:35:16.574181 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:17 crc kubenswrapper[4812]: I1124 21:35:17.098172 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hb6k"] Nov 24 21:35:17 crc kubenswrapper[4812]: W1124 21:35:17.108108 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73e42f55_63cd_4463_a945_f8684e8947ab.slice/crio-3c3f0bc18148b5354226627ddf256ff56d6e50141627240c14acd22d6ec0e69d WatchSource:0}: Error finding container 3c3f0bc18148b5354226627ddf256ff56d6e50141627240c14acd22d6ec0e69d: Status 404 returned error can't find the container with id 3c3f0bc18148b5354226627ddf256ff56d6e50141627240c14acd22d6ec0e69d Nov 24 21:35:17 crc kubenswrapper[4812]: I1124 21:35:17.444149 4812 generic.go:334] "Generic (PLEG): container finished" podID="73e42f55-63cd-4463-a945-f8684e8947ab" containerID="b2f72dec72f53417bfab4289dda840ef938ea4465e2f4d9208e3fbb8d6151a9f" exitCode=0 Nov 24 21:35:17 crc kubenswrapper[4812]: I1124 21:35:17.444193 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hb6k" event={"ID":"73e42f55-63cd-4463-a945-f8684e8947ab","Type":"ContainerDied","Data":"b2f72dec72f53417bfab4289dda840ef938ea4465e2f4d9208e3fbb8d6151a9f"} Nov 24 21:35:17 crc kubenswrapper[4812]: I1124 21:35:17.444221 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hb6k" event={"ID":"73e42f55-63cd-4463-a945-f8684e8947ab","Type":"ContainerStarted","Data":"3c3f0bc18148b5354226627ddf256ff56d6e50141627240c14acd22d6ec0e69d"} Nov 24 21:35:17 crc kubenswrapper[4812]: I1124 21:35:17.969453 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:35:17 crc kubenswrapper[4812]: E1124 21:35:17.970308 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:35:18 crc kubenswrapper[4812]: I1124 21:35:18.458832 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hb6k" event={"ID":"73e42f55-63cd-4463-a945-f8684e8947ab","Type":"ContainerStarted","Data":"bf63b9520bac954320a42036f83cd8de92d0160f9eb0cb106646536432d576d2"} Nov 24 21:35:19 crc kubenswrapper[4812]: I1124 21:35:19.472865 4812 generic.go:334] "Generic (PLEG): container finished" podID="73e42f55-63cd-4463-a945-f8684e8947ab" containerID="bf63b9520bac954320a42036f83cd8de92d0160f9eb0cb106646536432d576d2" exitCode=0 Nov 24 21:35:19 crc kubenswrapper[4812]: I1124 21:35:19.472990 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hb6k" 
event={"ID":"73e42f55-63cd-4463-a945-f8684e8947ab","Type":"ContainerDied","Data":"bf63b9520bac954320a42036f83cd8de92d0160f9eb0cb106646536432d576d2"} Nov 24 21:35:20 crc kubenswrapper[4812]: I1124 21:35:20.485968 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hb6k" event={"ID":"73e42f55-63cd-4463-a945-f8684e8947ab","Type":"ContainerStarted","Data":"8ee5fc39ffb151b31c7508a15b05a4e6fddb32dc09746875ea5d47bb5708080e"} Nov 24 21:35:20 crc kubenswrapper[4812]: I1124 21:35:20.510358 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hb6k" podStartSLOduration=2.002284013 podStartE2EDuration="4.510319815s" podCreationTimestamp="2025-11-24 21:35:16 +0000 UTC" firstStartedPulling="2025-11-24 21:35:17.448490278 +0000 UTC m=+8311.237442669" lastFinishedPulling="2025-11-24 21:35:19.95652606 +0000 UTC m=+8313.745478471" observedRunningTime="2025-11-24 21:35:20.506202208 +0000 UTC m=+8314.295154579" watchObservedRunningTime="2025-11-24 21:35:20.510319815 +0000 UTC m=+8314.299272206" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.527600 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4rvtd"] Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.532822 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.537833 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rvtd"] Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.680751 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-utilities\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.681168 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-catalog-content\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.681398 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbgsz\" (UniqueName: \"kubernetes.io/projected/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-kube-api-access-fbgsz\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.783363 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-catalog-content\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.783443 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbgsz\" (UniqueName: \"kubernetes.io/projected/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-kube-api-access-fbgsz\") pod 
\"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.783522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-utilities\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.783983 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-catalog-content\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.784010 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-utilities\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.804845 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbgsz\" (UniqueName: \"kubernetes.io/projected/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-kube-api-access-fbgsz\") pod \"redhat-operators-4rvtd\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:24 crc kubenswrapper[4812]: I1124 21:35:24.885708 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:25 crc kubenswrapper[4812]: W1124 21:35:25.366577 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0f9fb81_8c10_4224_a1d4_2d8afa7bd92a.slice/crio-f6701857d82b260366b5615052864f71d6efc56981fc57f038dbfea1527e107a WatchSource:0}: Error finding container f6701857d82b260366b5615052864f71d6efc56981fc57f038dbfea1527e107a: Status 404 returned error can't find the container with id f6701857d82b260366b5615052864f71d6efc56981fc57f038dbfea1527e107a Nov 24 21:35:25 crc kubenswrapper[4812]: I1124 21:35:25.367880 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4rvtd"] Nov 24 21:35:25 crc kubenswrapper[4812]: I1124 21:35:25.550643 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rvtd" event={"ID":"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a","Type":"ContainerStarted","Data":"f6701857d82b260366b5615052864f71d6efc56981fc57f038dbfea1527e107a"} Nov 24 21:35:26 crc kubenswrapper[4812]: I1124 21:35:26.570541 4812 generic.go:334] "Generic (PLEG): container finished" podID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerID="2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6" exitCode=0 Nov 24 21:35:26 crc kubenswrapper[4812]: I1124 21:35:26.570624 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rvtd" event={"ID":"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a","Type":"ContainerDied","Data":"2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6"} Nov 24 21:35:26 crc kubenswrapper[4812]: I1124 21:35:26.574303 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:26 crc kubenswrapper[4812]: I1124 21:35:26.574392 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:26 crc kubenswrapper[4812]: I1124 21:35:26.574529 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:35:26 crc kubenswrapper[4812]: I1124 21:35:26.647206 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:27 crc kubenswrapper[4812]: I1124 21:35:27.587555 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rvtd" event={"ID":"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a","Type":"ContainerStarted","Data":"ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2"} Nov 24 21:35:27 crc kubenswrapper[4812]: I1124 21:35:27.676886 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:29 crc kubenswrapper[4812]: I1124 21:35:29.110772 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hb6k"] Nov 24 21:35:29 crc kubenswrapper[4812]: I1124 21:35:29.607834 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hb6k" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="registry-server" containerID="cri-o://8ee5fc39ffb151b31c7508a15b05a4e6fddb32dc09746875ea5d47bb5708080e" gracePeriod=2 Nov 24 21:35:29 crc kubenswrapper[4812]: I1124 21:35:29.966770 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:35:29 crc kubenswrapper[4812]: E1124 21:35:29.967172 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.622153 4812 generic.go:334] "Generic (PLEG): container finished" podID="73e42f55-63cd-4463-a945-f8684e8947ab" containerID="8ee5fc39ffb151b31c7508a15b05a4e6fddb32dc09746875ea5d47bb5708080e" exitCode=0 Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.622236 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hb6k" event={"ID":"73e42f55-63cd-4463-a945-f8684e8947ab","Type":"ContainerDied","Data":"8ee5fc39ffb151b31c7508a15b05a4e6fddb32dc09746875ea5d47bb5708080e"} Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.622587 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hb6k" event={"ID":"73e42f55-63cd-4463-a945-f8684e8947ab","Type":"ContainerDied","Data":"3c3f0bc18148b5354226627ddf256ff56d6e50141627240c14acd22d6ec0e69d"} Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.622609 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3f0bc18148b5354226627ddf256ff56d6e50141627240c14acd22d6ec0e69d" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.682846 4812 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.795564 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7b9h\" (UniqueName: \"kubernetes.io/projected/73e42f55-63cd-4463-a945-f8684e8947ab-kube-api-access-j7b9h\") pod \"73e42f55-63cd-4463-a945-f8684e8947ab\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.795623 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-utilities\") pod \"73e42f55-63cd-4463-a945-f8684e8947ab\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.795899 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-catalog-content\") pod \"73e42f55-63cd-4463-a945-f8684e8947ab\" (UID: \"73e42f55-63cd-4463-a945-f8684e8947ab\") " Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.797091 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-utilities" (OuterVolumeSpecName: "utilities") pod "73e42f55-63cd-4463-a945-f8684e8947ab" (UID: "73e42f55-63cd-4463-a945-f8684e8947ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.805878 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e42f55-63cd-4463-a945-f8684e8947ab-kube-api-access-j7b9h" (OuterVolumeSpecName: "kube-api-access-j7b9h") pod "73e42f55-63cd-4463-a945-f8684e8947ab" (UID: "73e42f55-63cd-4463-a945-f8684e8947ab"). InnerVolumeSpecName "kube-api-access-j7b9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.847849 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73e42f55-63cd-4463-a945-f8684e8947ab" (UID: "73e42f55-63cd-4463-a945-f8684e8947ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.898510 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.898549 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7b9h\" (UniqueName: \"kubernetes.io/projected/73e42f55-63cd-4463-a945-f8684e8947ab-kube-api-access-j7b9h\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:30 crc kubenswrapper[4812]: I1124 21:35:30.898564 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73e42f55-63cd-4463-a945-f8684e8947ab-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:31 crc kubenswrapper[4812]: I1124 21:35:31.631106 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hb6k" Nov 24 21:35:31 crc kubenswrapper[4812]: I1124 21:35:31.658232 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hb6k"] Nov 24 21:35:31 crc kubenswrapper[4812]: I1124 21:35:31.667858 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hb6k"] Nov 24 21:35:32 crc kubenswrapper[4812]: I1124 21:35:32.643244 4812 generic.go:334] "Generic (PLEG): container finished" podID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerID="ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2" exitCode=0 Nov 24 21:35:32 crc kubenswrapper[4812]: I1124 21:35:32.643585 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rvtd" event={"ID":"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a","Type":"ContainerDied","Data":"ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2"} Nov 24 21:35:32 crc kubenswrapper[4812]: I1124 21:35:32.977530 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" path="/var/lib/kubelet/pods/73e42f55-63cd-4463-a945-f8684e8947ab/volumes" Nov 24 21:35:33 crc kubenswrapper[4812]: I1124 21:35:33.654667 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rvtd" event={"ID":"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a","Type":"ContainerStarted","Data":"9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a"} Nov 24 21:35:33 crc kubenswrapper[4812]: I1124 21:35:33.681560 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4rvtd" podStartSLOduration=3.2306081 podStartE2EDuration="9.681543881s" podCreationTimestamp="2025-11-24 21:35:24 +0000 UTC" firstStartedPulling="2025-11-24 21:35:26.574115913 +0000 UTC m=+8320.363068284" lastFinishedPulling="2025-11-24 21:35:33.025051694 +0000 UTC m=+8326.814004065" observedRunningTime="2025-11-24 21:35:33.673697858 +0000 UTC m=+8327.462650229" watchObservedRunningTime="2025-11-24 21:35:33.681543881 +0000 UTC m=+8327.470496252" Nov 24 21:35:34 crc kubenswrapper[4812]: I1124 21:35:34.887276 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:34 crc kubenswrapper[4812]: I1124 21:35:34.887587 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:35 crc kubenswrapper[4812]: I1124 21:35:35.951642 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4rvtd" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="registry-server" probeResult="failure" output=< Nov 24 21:35:35 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:35:35 crc kubenswrapper[4812]: > Nov 24 21:35:40 crc kubenswrapper[4812]: I1124 21:35:40.966125 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:35:40 crc kubenswrapper[4812]: E1124 21:35:40.966923 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:35:45 crc kubenswrapper[4812]: I1124 21:35:45.948155 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4rvtd" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="registry-server" probeResult="failure" output=< Nov 24 21:35:45 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:35:45 crc kubenswrapper[4812]: > Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.250727 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lwcg8"] Nov 24 21:35:53 crc kubenswrapper[4812]: E1124 21:35:53.253887 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="registry-server" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.253909 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="registry-server" Nov 24 21:35:53 crc kubenswrapper[4812]: E1124 21:35:53.253932 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="extract-content" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.253940 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="extract-content" Nov 24 21:35:53 crc kubenswrapper[4812]: E1124 21:35:53.253971 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="extract-utilities" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.253980 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="extract-utilities" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.254314 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e42f55-63cd-4463-a945-f8684e8947ab" containerName="registry-server" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.256591 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.268672 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwcg8"] Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.370874 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-utilities\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.370969 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brkld\" (UniqueName: \"kubernetes.io/projected/1083b5d3-b890-41cc-9639-2ad4161cf2d6-kube-api-access-brkld\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.371049 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-catalog-content\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.473043 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brkld\" (UniqueName: \"kubernetes.io/projected/1083b5d3-b890-41cc-9639-2ad4161cf2d6-kube-api-access-brkld\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.473191 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-catalog-content\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.473352 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-utilities\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.473693 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-catalog-content\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.473781 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-utilities\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.502914 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-brkld\" (UniqueName: \"kubernetes.io/projected/1083b5d3-b890-41cc-9639-2ad4161cf2d6-kube-api-access-brkld\") pod \"redhat-marketplace-lwcg8\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.594553 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:35:53 crc kubenswrapper[4812]: I1124 21:35:53.965711 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:35:53 crc kubenswrapper[4812]: E1124 21:35:53.966370 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:35:54 crc kubenswrapper[4812]: I1124 21:35:54.096718 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwcg8"] Nov 24 21:35:54 crc kubenswrapper[4812]: I1124 21:35:54.911769 4812 generic.go:334] "Generic (PLEG): container finished" podID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerID="3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05" exitCode=0 Nov 24 21:35:54 crc kubenswrapper[4812]: I1124 21:35:54.911825 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwcg8" event={"ID":"1083b5d3-b890-41cc-9639-2ad4161cf2d6","Type":"ContainerDied","Data":"3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05"} Nov 24 21:35:54 crc kubenswrapper[4812]: I1124 21:35:54.912142 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwcg8" event={"ID":"1083b5d3-b890-41cc-9639-2ad4161cf2d6","Type":"ContainerStarted","Data":"b5011a92a2a457e7055fa26428789d62b7087c2bbee4b1f916a9734267526a3c"} Nov 24 21:35:54 crc kubenswrapper[4812]: I1124 21:35:54.986223 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:55 crc kubenswrapper[4812]: I1124 21:35:55.039125 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:55 crc kubenswrapper[4812]: I1124 21:35:55.929262 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwcg8" event={"ID":"1083b5d3-b890-41cc-9639-2ad4161cf2d6","Type":"ContainerStarted","Data":"f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d"} Nov 24 21:35:56 crc kubenswrapper[4812]: I1124 21:35:56.817593 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4rvtd"] Nov 24 21:35:56 crc kubenswrapper[4812]: I1124 21:35:56.944038 4812 generic.go:334] "Generic (PLEG): container finished" podID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerID="f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d" exitCode=0 Nov 24 21:35:56 crc kubenswrapper[4812]: I1124 21:35:56.944150 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwcg8" 
event={"ID":"1083b5d3-b890-41cc-9639-2ad4161cf2d6","Type":"ContainerDied","Data":"f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d"} Nov 24 21:35:56 crc kubenswrapper[4812]: I1124 21:35:56.944468 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4rvtd" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="registry-server" containerID="cri-o://9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a" gracePeriod=2 Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.439655 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.581715 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-utilities\") pod \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.582190 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-catalog-content\") pod \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.582397 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbgsz\" (UniqueName: \"kubernetes.io/projected/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-kube-api-access-fbgsz\") pod \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\" (UID: \"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a\") " Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.582529 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-utilities" (OuterVolumeSpecName: "utilities") pod "d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" (UID: "d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.583137 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.588228 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-kube-api-access-fbgsz" (OuterVolumeSpecName: "kube-api-access-fbgsz") pod "d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" (UID: "d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a"). InnerVolumeSpecName "kube-api-access-fbgsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.668525 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" (UID: "d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.685483 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.685511 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbgsz\" (UniqueName: \"kubernetes.io/projected/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a-kube-api-access-fbgsz\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.955487 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwcg8" event={"ID":"1083b5d3-b890-41cc-9639-2ad4161cf2d6","Type":"ContainerStarted","Data":"e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107"} Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.958260 4812 generic.go:334] "Generic (PLEG): container finished" podID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerID="9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a" exitCode=0 Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.958296 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rvtd" event={"ID":"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a","Type":"ContainerDied","Data":"9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a"} Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.958346 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4rvtd" event={"ID":"d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a","Type":"ContainerDied","Data":"f6701857d82b260366b5615052864f71d6efc56981fc57f038dbfea1527e107a"} Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.958352 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4rvtd" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.958369 4812 scope.go:117] "RemoveContainer" containerID="9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.977378 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lwcg8" podStartSLOduration=2.44482831 podStartE2EDuration="4.977350875s" podCreationTimestamp="2025-11-24 21:35:53 +0000 UTC" firstStartedPulling="2025-11-24 21:35:54.914043546 +0000 UTC m=+8348.702995927" lastFinishedPulling="2025-11-24 21:35:57.446566121 +0000 UTC m=+8351.235518492" observedRunningTime="2025-11-24 21:35:57.973093704 +0000 UTC m=+8351.762046075" watchObservedRunningTime="2025-11-24 21:35:57.977350875 +0000 UTC m=+8351.766303266" Nov 24 21:35:57 crc kubenswrapper[4812]: I1124 21:35:57.986252 4812 scope.go:117] "RemoveContainer" containerID="ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.005262 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4rvtd"] Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.017331 4812 scope.go:117] "RemoveContainer" containerID="2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.017579 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4rvtd"] Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.039514 4812 scope.go:117] "RemoveContainer" containerID="9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a" Nov 24 21:35:58 crc kubenswrapper[4812]: E1124 21:35:58.039931 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a\": container with ID starting with 9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a not found: ID does not exist" containerID="9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.039961 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a"} err="failed to get container status \"9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a\": rpc error: code = NotFound desc = could not find container \"9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a\": container with ID starting with 9d4ee2a8078c3fa891250e4e127ccdb79fcebdf645f564c84ecbd65ff370b50a not found: ID does not exist" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.039982 4812 scope.go:117] "RemoveContainer" containerID="ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2" Nov 24 21:35:58 crc kubenswrapper[4812]: E1124 21:35:58.040264 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2\": container with ID starting with ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2 not found: ID does not exist" containerID="ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.040286 4812 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2"} err="failed to get container status \"ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2\": rpc error: code = NotFound desc = could not find container \"ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2\": container with ID starting with ca4ab1fcbe1359b9cb6fa03e2a33f951765db628f25189be08359a1df90cd4f2 not found: ID does not exist" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.040299 4812 scope.go:117] "RemoveContainer" containerID="2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6" Nov 24 21:35:58 crc kubenswrapper[4812]: E1124 21:35:58.040584 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6\": container with ID starting with 2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6 not found: ID does not exist" containerID="2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.040635 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6"} err="failed to get container status \"2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6\": rpc error: code = NotFound desc = could not find container \"2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6\": container with ID starting with 2a7d906ee846d728b7d5b97dd0db696a813db0b04120b5e482f84e9c95f89dc6 not found: ID does not exist" Nov 24 21:35:58 crc kubenswrapper[4812]: I1124 21:35:58.980947 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" path="/var/lib/kubelet/pods/d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a/volumes" Nov 24 21:36:03 crc kubenswrapper[4812]: I1124 21:36:03.595401 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:36:03 crc kubenswrapper[4812]: I1124 21:36:03.595998 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:36:03 crc kubenswrapper[4812]: I1124 21:36:03.658985 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:36:04 crc kubenswrapper[4812]: I1124 21:36:04.119629 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:36:04 crc kubenswrapper[4812]: I1124 21:36:04.176490 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwcg8"] Nov 24 21:36:05 crc kubenswrapper[4812]: I1124 21:36:05.966313 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:36:05 crc kubenswrapper[4812]: E1124 21:36:05.967473 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.069009 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lwcg8" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="registry-server" containerID="cri-o://e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107" gracePeriod=2 Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.626732 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.816643 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brkld\" (UniqueName: \"kubernetes.io/projected/1083b5d3-b890-41cc-9639-2ad4161cf2d6-kube-api-access-brkld\") pod \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.816710 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-utilities\") pod \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.816973 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-catalog-content\") pod \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\" (UID: \"1083b5d3-b890-41cc-9639-2ad4161cf2d6\") " Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.817558 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-utilities" (OuterVolumeSpecName: "utilities") pod "1083b5d3-b890-41cc-9639-2ad4161cf2d6" (UID: "1083b5d3-b890-41cc-9639-2ad4161cf2d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.817690 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.824384 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1083b5d3-b890-41cc-9639-2ad4161cf2d6-kube-api-access-brkld" (OuterVolumeSpecName: "kube-api-access-brkld") pod "1083b5d3-b890-41cc-9639-2ad4161cf2d6" (UID: "1083b5d3-b890-41cc-9639-2ad4161cf2d6"). InnerVolumeSpecName "kube-api-access-brkld". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.853618 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1083b5d3-b890-41cc-9639-2ad4161cf2d6" (UID: "1083b5d3-b890-41cc-9639-2ad4161cf2d6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.919843 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1083b5d3-b890-41cc-9639-2ad4161cf2d6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:36:06 crc kubenswrapper[4812]: I1124 21:36:06.920139 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brkld\" (UniqueName: \"kubernetes.io/projected/1083b5d3-b890-41cc-9639-2ad4161cf2d6-kube-api-access-brkld\") on node \"crc\" DevicePath \"\"" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.082395 4812 generic.go:334] "Generic (PLEG): container finished" podID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerID="e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107" exitCode=0 Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.082438 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwcg8" event={"ID":"1083b5d3-b890-41cc-9639-2ad4161cf2d6","Type":"ContainerDied","Data":"e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107"} Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.082466 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lwcg8" event={"ID":"1083b5d3-b890-41cc-9639-2ad4161cf2d6","Type":"ContainerDied","Data":"b5011a92a2a457e7055fa26428789d62b7087c2bbee4b1f916a9734267526a3c"} Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.082481 4812 scope.go:117] "RemoveContainer" containerID="e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.082648 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lwcg8" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.111199 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwcg8"] Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.116283 4812 scope.go:117] "RemoveContainer" containerID="f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.120985 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lwcg8"] Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.147230 4812 scope.go:117] "RemoveContainer" containerID="3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.191128 4812 scope.go:117] "RemoveContainer" containerID="e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107" Nov 24 21:36:07 crc kubenswrapper[4812]: E1124 21:36:07.191632 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107\": container with ID starting with e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107 not found: ID does not exist" containerID="e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.191683 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107"} err="failed to get container status \"e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107\": rpc error: code = NotFound desc = could not find container \"e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107\": container with ID starting with e1f4fec193f56a0e50e30bbbc61a68d09f6d11b85ebdadc2e35d96a845b70107 not found: ID does not exist" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.191715 4812 scope.go:117] "RemoveContainer" containerID="f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d" Nov 24 21:36:07 crc kubenswrapper[4812]: E1124 21:36:07.192173 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d\": container with ID starting with f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d not found: ID does not exist" containerID="f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.192204 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d"} err="failed to get container status \"f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d\": rpc error: code = NotFound desc = could not find container \"f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d\": container with ID starting with f322cb6f47baa97d2ed8c9e13dfa13b87896565f93f08af3351b88cf64be140d not found: ID does not exist" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.192224 4812 scope.go:117] "RemoveContainer" containerID="3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05" Nov 24 21:36:07 crc kubenswrapper[4812]: E1124 21:36:07.192500 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05\": container with ID starting with 3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05 not found: ID does not exist" containerID="3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05" Nov 24 21:36:07 crc kubenswrapper[4812]: I1124 21:36:07.192518 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05"} err="failed to get container status \"3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05\": rpc error: code = NotFound desc = could not find container \"3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05\": container with ID starting with 3c8fd32ec99e56c52f65df7722f2acab2f605496ef222c17e768d2fac4a15c05 not found: ID does not exist" Nov 24 21:36:08 crc kubenswrapper[4812]: I1124 21:36:08.978514 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" path="/var/lib/kubelet/pods/1083b5d3-b890-41cc-9639-2ad4161cf2d6/volumes" Nov 24 21:36:18 crc kubenswrapper[4812]: I1124 21:36:18.967057 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:36:18 crc kubenswrapper[4812]: E1124 21:36:18.968709 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:36:30 crc kubenswrapper[4812]: I1124 21:36:30.966791 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:36:30 crc kubenswrapper[4812]: E1124 21:36:30.967763 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:36:41 crc kubenswrapper[4812]: I1124 21:36:41.966581 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:36:41 crc kubenswrapper[4812]: E1124 21:36:41.967797 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:36:53 crc kubenswrapper[4812]: I1124 21:36:53.966134 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:36:53 crc kubenswrapper[4812]: E1124 21:36:53.967434 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:37:07 crc kubenswrapper[4812]: I1124 21:37:07.965748 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:37:07 crc kubenswrapper[4812]: E1124 21:37:07.966562 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:37:19 crc kubenswrapper[4812]: I1124 21:37:19.965636 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:37:19 crc kubenswrapper[4812]: E1124 21:37:19.966679 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:37:32 crc kubenswrapper[4812]: I1124 21:37:32.966887 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:37:32 crc kubenswrapper[4812]: E1124 21:37:32.968798 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:37:46 crc kubenswrapper[4812]: I1124 21:37:46.974821 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:37:46 crc kubenswrapper[4812]: E1124 21:37:46.975776 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:38:01 crc kubenswrapper[4812]: I1124 21:38:01.965380 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:38:01 crc kubenswrapper[4812]: E1124 21:38:01.966284 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:38:12 crc kubenswrapper[4812]: I1124 21:38:12.968484 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:38:13 crc kubenswrapper[4812]: I1124 21:38:13.602729 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"b3f56c073c00193c8a886f24891a362855bcd6c55ec97ee2a6fb66dec8e26073"} Nov 24 21:38:21 crc kubenswrapper[4812]: I1124 21:38:21.691128 4812 generic.go:334] "Generic (PLEG): container finished" podID="f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" containerID="592660d84aec6f22e303ffcb821d134592665cf0e49116df3736b114fc257fa4" exitCode=0 Nov 24 21:38:21 crc kubenswrapper[4812]: I1124 21:38:21.691230 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" event={"ID":"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199","Type":"ContainerDied","Data":"592660d84aec6f22e303ffcb821d134592665cf0e49116df3736b114fc257fa4"} Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.204855 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358478 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-1\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358528 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cells-global-config-0\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358572 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-0\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358653 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-1\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358710 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-combined-ca-bundle\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358780 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-ssh-key\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358831 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnlcg\" (UniqueName: \"kubernetes.io/projected/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-kube-api-access-cnlcg\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358894 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-0\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.358964 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-inventory\") pod \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\" (UID: \"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199\") " Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.366072 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-kube-api-access-cnlcg" (OuterVolumeSpecName: "kube-api-access-cnlcg") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "kube-api-access-cnlcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.366744 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.391691 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.393897 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.397631 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "nova-cells-global-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.398532 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.404049 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-inventory" (OuterVolumeSpecName: "inventory") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.424537 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.426581 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" (UID: "f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463095 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnlcg\" (UniqueName: \"kubernetes.io/projected/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-kube-api-access-cnlcg\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463153 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463165 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463177 4812 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463197 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463210 4812 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463224 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463236 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.463249 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.750149 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" event={"ID":"f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199","Type":"ContainerDied","Data":"c684ad8434cb74120b55d68f70e26a7f34bba917a56089630562f5dd98351e3b"} Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.750208 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c684ad8434cb74120b55d68f70e26a7f34bba917a56089630562f5dd98351e3b" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.750289 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-rjxhn" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.828451 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hwsjw"] Nov 24 21:38:23 crc kubenswrapper[4812]: E1124 21:38:23.828932 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="registry-server" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.828949 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="registry-server" Nov 24 21:38:23 crc kubenswrapper[4812]: E1124 21:38:23.828979 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="extract-utilities" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.828986 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="extract-utilities" Nov 24 21:38:23 crc kubenswrapper[4812]: E1124 21:38:23.828994 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="extract-utilities" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.829001 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="extract-utilities" Nov 24 21:38:23 crc kubenswrapper[4812]: E1124 21:38:23.829014 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="extract-content" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.829019 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="extract-content" Nov 24 21:38:23 crc kubenswrapper[4812]: E1124 21:38:23.829031 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="extract-content" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.829038 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="extract-content" Nov 24 21:38:23 crc kubenswrapper[4812]: E1124 21:38:23.829056 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" containerName="nova-cell1-openstack-openstack-cell1" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.829063 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" containerName="nova-cell1-openstack-openstack-cell1" Nov 24 21:38:23 crc kubenswrapper[4812]: E1124 21:38:23.829072 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="registry-server" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.829078 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="registry-server" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.829268 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1083b5d3-b890-41cc-9639-2ad4161cf2d6" containerName="registry-server" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.829283 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199" containerName="nova-cell1-openstack-openstack-cell1" Nov 24 21:38:23 crc kubenswrapper[4812]: 
I1124 21:38:23.829299 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0f9fb81-8c10-4224-a1d4-2d8afa7bd92a" containerName="registry-server" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.830002 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.833051 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.833321 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.833946 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.834287 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.834590 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.847608 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hwsjw"] Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.974576 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.974643 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.974827 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-inventory\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.975077 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.975242 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: 
\"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.975358 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:23 crc kubenswrapper[4812]: I1124 21:38:23.975396 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnsqq\" (UniqueName: \"kubernetes.io/projected/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-kube-api-access-bnsqq\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.077365 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.077480 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.077553 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.077584 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnsqq\" (UniqueName: \"kubernetes.io/projected/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-kube-api-access-bnsqq\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.077643 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.077701 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: 
\"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.077790 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-inventory\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.082890 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ssh-key\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.083738 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-inventory\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.085449 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.085852 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.087000 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.087941 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.112474 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnsqq\" (UniqueName: \"kubernetes.io/projected/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-kube-api-access-bnsqq\") pod \"telemetry-openstack-openstack-cell1-hwsjw\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.164404 4812 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.766522 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-hwsjw"] Nov 24 21:38:24 crc kubenswrapper[4812]: I1124 21:38:24.773378 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" event={"ID":"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe","Type":"ContainerStarted","Data":"fd2b881a90f07b165baf0ad442018c5c050d17dd52aa21dcb095962baf5b3558"} Nov 24 21:38:25 crc kubenswrapper[4812]: I1124 21:38:25.787996 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" event={"ID":"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe","Type":"ContainerStarted","Data":"8167fbebcafa6ff70190fccee352fc95fa2fa3abe2c4630a98ef00171a095e3d"} Nov 24 21:38:25 crc kubenswrapper[4812]: I1124 21:38:25.817288 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" podStartSLOduration=2.346726749 podStartE2EDuration="2.817271076s" podCreationTimestamp="2025-11-24 21:38:23 +0000 UTC" firstStartedPulling="2025-11-24 21:38:24.743344642 +0000 UTC m=+8498.532297013" lastFinishedPulling="2025-11-24 21:38:25.213888969 +0000 UTC m=+8499.002841340" observedRunningTime="2025-11-24 21:38:25.816085773 +0000 UTC m=+8499.605038154" watchObservedRunningTime="2025-11-24 21:38:25.817271076 +0000 UTC m=+8499.606223437" Nov 24 21:40:32 crc kubenswrapper[4812]: I1124 21:40:32.998294 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:40:32 crc kubenswrapper[4812]: I1124 21:40:32.999041 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:41:03 crc kubenswrapper[4812]: I1124 21:41:02.998392 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:41:03 crc kubenswrapper[4812]: I1124 21:41:03.000350 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:32.998447 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:32.999162 4812 prober.go:107] "Probe failed" probeType="Liveness" 
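
Note: the pod_startup_latency_tracker entry above reports two durations for the telemetry pod. podStartE2EDuration is wall-clock creation-to-running; podStartSLOduration excludes the image-pull window. The logged values are self-consistent, as this check with numbers copied verbatim from the entry shows:

    package main

    import "fmt"

    func main() {
        // Seconds-within-the-minute values copied from the entry above.
        pull := 25.213888969 - 24.743344642 // lastFinishedPulling - firstStartedPulling
        slo := 2.817271076 - pull           // podStartE2EDuration - image-pull time
        fmt.Printf("pull = %.9fs, slo = %.9fs\n", pull, slo)
        // pull = 0.470544327s, slo = 2.346726749s (matches the logged podStartSLOduration)
    }
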
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:32.999232 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:33.002524 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b3f56c073c00193c8a886f24891a362855bcd6c55ec97ee2a6fb66dec8e26073"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:33.002627 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://b3f56c073c00193c8a886f24891a362855bcd6c55ec97ee2a6fb66dec8e26073" gracePeriod=600 Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:33.155997 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="b3f56c073c00193c8a886f24891a362855bcd6c55ec97ee2a6fb66dec8e26073" exitCode=0 Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:33.156075 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"b3f56c073c00193c8a886f24891a362855bcd6c55ec97ee2a6fb66dec8e26073"} Nov 24 21:41:33 crc kubenswrapper[4812]: I1124 21:41:33.156298 4812 scope.go:117] "RemoveContainer" containerID="4942ea662df6c5d9f7dd6db5e2d2b577c81b52d5a8cbdc5e67f72fcae3169930" Nov 24 21:41:34 crc kubenswrapper[4812]: I1124 21:41:34.170536 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337"} Nov 24 21:42:04 crc kubenswrapper[4812]: I1124 21:42:04.554662 4812 scope.go:117] "RemoveContainer" containerID="bf63b9520bac954320a42036f83cd8de92d0160f9eb0cb106646536432d576d2" Nov 24 21:42:04 crc kubenswrapper[4812]: I1124 21:42:04.611873 4812 scope.go:117] "RemoveContainer" containerID="8ee5fc39ffb151b31c7508a15b05a4e6fddb32dc09746875ea5d47bb5708080e" Nov 24 21:42:04 crc kubenswrapper[4812]: I1124 21:42:04.644465 4812 scope.go:117] "RemoveContainer" containerID="b2f72dec72f53417bfab4289dda840ef938ea4465e2f4d9208e3fbb8d6151a9f" Nov 24 21:42:30 crc kubenswrapper[4812]: I1124 21:42:30.893364 4812 generic.go:334] "Generic (PLEG): container finished" podID="6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" containerID="8167fbebcafa6ff70190fccee352fc95fa2fa3abe2c4630a98ef00171a095e3d" exitCode=0 Nov 24 21:42:30 crc kubenswrapper[4812]: I1124 21:42:30.893470 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" event={"ID":"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe","Type":"ContainerDied","Data":"8167fbebcafa6ff70190fccee352fc95fa2fa3abe2c4630a98ef00171a095e3d"} Nov 24 21:42:32 crc 
kubenswrapper[4812]: I1124 21:42:32.362100 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.437509 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnsqq\" (UniqueName: \"kubernetes.io/projected/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-kube-api-access-bnsqq\") pod \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.437598 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-telemetry-combined-ca-bundle\") pod \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.437625 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-1\") pod \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.437658 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-2\") pod \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.437695 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-0\") pod \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.437778 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-inventory\") pod \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.437943 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ssh-key\") pod \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\" (UID: \"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe\") " Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.448439 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" (UID: "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.460297 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-kube-api-access-bnsqq" (OuterVolumeSpecName: "kube-api-access-bnsqq") pod "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" (UID: "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe"). InnerVolumeSpecName "kube-api-access-bnsqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.474705 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" (UID: "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.477185 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-inventory" (OuterVolumeSpecName: "inventory") pod "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" (UID: "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.477785 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" (UID: "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.480561 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" (UID: "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.499538 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" (UID: "6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.540870 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.541110 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnsqq\" (UniqueName: \"kubernetes.io/projected/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-kube-api-access-bnsqq\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.541190 4812 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.541261 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.541426 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.541511 4812 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.541587 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.920050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" event={"ID":"6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe","Type":"ContainerDied","Data":"fd2b881a90f07b165baf0ad442018c5c050d17dd52aa21dcb095962baf5b3558"} Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.920383 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd2b881a90f07b165baf0ad442018c5c050d17dd52aa21dcb095962baf5b3558" Nov 24 21:42:32 crc kubenswrapper[4812]: I1124 21:42:32.920452 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-hwsjw" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.057480 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-kdd46"] Nov 24 21:42:33 crc kubenswrapper[4812]: E1124 21:42:33.057921 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" containerName="telemetry-openstack-openstack-cell1" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.057937 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" containerName="telemetry-openstack-openstack-cell1" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.058155 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe" containerName="telemetry-openstack-openstack-cell1" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.058887 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.063262 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.063302 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.063273 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.063492 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.065450 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.071916 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-kdd46"] Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.157795 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.157900 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.158008 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 
21:42:33.158125 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.158152 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbpr5\" (UniqueName: \"kubernetes.io/projected/6a59db0a-69f8-4843-9570-3dad3cb5d916-kube-api-access-pbpr5\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.260176 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.260399 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.260603 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.260660 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbpr5\" (UniqueName: \"kubernetes.io/projected/6a59db0a-69f8-4843-9570-3dad3cb5d916-kube-api-access-pbpr5\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.260782 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.265643 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.267030 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.280576 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.283469 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbpr5\" (UniqueName: \"kubernetes.io/projected/6a59db0a-69f8-4843-9570-3dad3cb5d916-kube-api-access-pbpr5\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.284624 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-kdd46\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:33 crc kubenswrapper[4812]: I1124 21:42:33.377042 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:42:34 crc kubenswrapper[4812]: I1124 21:42:34.076263 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-kdd46"] Nov 24 21:42:34 crc kubenswrapper[4812]: I1124 21:42:34.077776 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:42:34 crc kubenswrapper[4812]: I1124 21:42:34.946583 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" event={"ID":"6a59db0a-69f8-4843-9570-3dad3cb5d916","Type":"ContainerStarted","Data":"48f7f82dbaadb839ffd511ec3c1fe1d0c960d1c87f556ba733c13dc9341d9678"} Nov 24 21:42:35 crc kubenswrapper[4812]: I1124 21:42:35.955599 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" event={"ID":"6a59db0a-69f8-4843-9570-3dad3cb5d916","Type":"ContainerStarted","Data":"e3834309878d350b09f53aedc5ba8c6bed0d0a04d14b49d7b5956e2d6681227f"} Nov 24 21:42:35 crc kubenswrapper[4812]: I1124 21:42:35.977592 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" podStartSLOduration=2.202874571 podStartE2EDuration="2.977565406s" podCreationTimestamp="2025-11-24 21:42:33 +0000 UTC" firstStartedPulling="2025-11-24 21:42:34.07750509 +0000 UTC m=+8747.866457461" lastFinishedPulling="2025-11-24 21:42:34.852195885 +0000 UTC m=+8748.641148296" observedRunningTime="2025-11-24 21:42:35.970930428 +0000 UTC m=+8749.759882849" watchObservedRunningTime="2025-11-24 21:42:35.977565406 +0000 UTC m=+8749.766517787" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.240498 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-swdnx"] Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.243934 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.251869 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swdnx"] Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.345961 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmfmg\" (UniqueName: \"kubernetes.io/projected/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-kube-api-access-kmfmg\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.346240 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-utilities\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.346358 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-catalog-content\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.448781 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmfmg\" (UniqueName: \"kubernetes.io/projected/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-kube-api-access-kmfmg\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.448895 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-utilities\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.448983 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-catalog-content\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.449690 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-utilities\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.449789 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-catalog-content\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.473211 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kmfmg\" (UniqueName: \"kubernetes.io/projected/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-kube-api-access-kmfmg\") pod \"certified-operators-swdnx\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:45 crc kubenswrapper[4812]: I1124 21:42:45.561381 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:46 crc kubenswrapper[4812]: I1124 21:42:46.116954 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swdnx"] Nov 24 21:42:46 crc kubenswrapper[4812]: W1124 21:42:46.124141 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc60b8759_7e0c_4465_8724_c6ff5a9d04b4.slice/crio-49b2d3bd8c8dec03153ee4acfaf20bde242a52916f1dea8ee06386978ffa437b WatchSource:0}: Error finding container 49b2d3bd8c8dec03153ee4acfaf20bde242a52916f1dea8ee06386978ffa437b: Status 404 returned error can't find the container with id 49b2d3bd8c8dec03153ee4acfaf20bde242a52916f1dea8ee06386978ffa437b Nov 24 21:42:46 crc kubenswrapper[4812]: E1124 21:42:46.807772 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc60b8759_7e0c_4465_8724_c6ff5a9d04b4.slice/crio-9b8cfdc550a27d7cff79aafa218612a5c41e92a864364ab9aa9196f25bd33735.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc60b8759_7e0c_4465_8724_c6ff5a9d04b4.slice/crio-conmon-9b8cfdc550a27d7cff79aafa218612a5c41e92a864364ab9aa9196f25bd33735.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:42:47 crc kubenswrapper[4812]: I1124 21:42:47.083137 4812 generic.go:334] "Generic (PLEG): container finished" podID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerID="9b8cfdc550a27d7cff79aafa218612a5c41e92a864364ab9aa9196f25bd33735" exitCode=0 Nov 24 21:42:47 crc kubenswrapper[4812]: I1124 21:42:47.083205 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swdnx" event={"ID":"c60b8759-7e0c-4465-8724-c6ff5a9d04b4","Type":"ContainerDied","Data":"9b8cfdc550a27d7cff79aafa218612a5c41e92a864364ab9aa9196f25bd33735"} Nov 24 21:42:47 crc kubenswrapper[4812]: I1124 21:42:47.083244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swdnx" event={"ID":"c60b8759-7e0c-4465-8724-c6ff5a9d04b4","Type":"ContainerStarted","Data":"49b2d3bd8c8dec03153ee4acfaf20bde242a52916f1dea8ee06386978ffa437b"} Nov 24 21:42:49 crc kubenswrapper[4812]: I1124 21:42:49.114798 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swdnx" event={"ID":"c60b8759-7e0c-4465-8724-c6ff5a9d04b4","Type":"ContainerStarted","Data":"98f1398cc62f0e19ce33674b8da3d4113b804ffddfcde04a031be330048c152d"} Nov 24 21:42:52 crc kubenswrapper[4812]: I1124 21:42:52.166916 4812 generic.go:334] "Generic (PLEG): container finished" podID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerID="98f1398cc62f0e19ce33674b8da3d4113b804ffddfcde04a031be330048c152d" exitCode=0 Nov 24 21:42:52 crc kubenswrapper[4812]: I1124 21:42:52.167699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swdnx" 
event={"ID":"c60b8759-7e0c-4465-8724-c6ff5a9d04b4","Type":"ContainerDied","Data":"98f1398cc62f0e19ce33674b8da3d4113b804ffddfcde04a031be330048c152d"} Nov 24 21:42:55 crc kubenswrapper[4812]: I1124 21:42:55.200113 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swdnx" event={"ID":"c60b8759-7e0c-4465-8724-c6ff5a9d04b4","Type":"ContainerStarted","Data":"a5c443d1fbf97cc7ab2b61b02d7912854988ca6e0c74c4d7f7acaddb4ad57c58"} Nov 24 21:42:55 crc kubenswrapper[4812]: I1124 21:42:55.250109 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-swdnx" podStartSLOduration=3.906583644 podStartE2EDuration="10.250077888s" podCreationTimestamp="2025-11-24 21:42:45 +0000 UTC" firstStartedPulling="2025-11-24 21:42:47.088658822 +0000 UTC m=+8760.877611193" lastFinishedPulling="2025-11-24 21:42:53.432153066 +0000 UTC m=+8767.221105437" observedRunningTime="2025-11-24 21:42:55.237129072 +0000 UTC m=+8769.026081453" watchObservedRunningTime="2025-11-24 21:42:55.250077888 +0000 UTC m=+8769.039030279" Nov 24 21:42:55 crc kubenswrapper[4812]: I1124 21:42:55.562109 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:55 crc kubenswrapper[4812]: I1124 21:42:55.562261 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:42:56 crc kubenswrapper[4812]: I1124 21:42:56.610053 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-swdnx" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="registry-server" probeResult="failure" output=< Nov 24 21:42:56 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:42:56 crc kubenswrapper[4812]: > Nov 24 21:43:05 crc kubenswrapper[4812]: I1124 21:43:05.638521 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:43:05 crc kubenswrapper[4812]: I1124 21:43:05.695718 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:43:05 crc kubenswrapper[4812]: I1124 21:43:05.879992 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swdnx"] Nov 24 21:43:07 crc kubenswrapper[4812]: I1124 21:43:07.331157 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-swdnx" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="registry-server" containerID="cri-o://a5c443d1fbf97cc7ab2b61b02d7912854988ca6e0c74c4d7f7acaddb4ad57c58" gracePeriod=2 Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.344693 4812 generic.go:334] "Generic (PLEG): container finished" podID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerID="a5c443d1fbf97cc7ab2b61b02d7912854988ca6e0c74c4d7f7acaddb4ad57c58" exitCode=0 Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.344776 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swdnx" event={"ID":"c60b8759-7e0c-4465-8724-c6ff5a9d04b4","Type":"ContainerDied","Data":"a5c443d1fbf97cc7ab2b61b02d7912854988ca6e0c74c4d7f7acaddb4ad57c58"} Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.497626 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.662349 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-catalog-content\") pod \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.662483 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmfmg\" (UniqueName: \"kubernetes.io/projected/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-kube-api-access-kmfmg\") pod \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.662517 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-utilities\") pod \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\" (UID: \"c60b8759-7e0c-4465-8724-c6ff5a9d04b4\") " Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.664079 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-utilities" (OuterVolumeSpecName: "utilities") pod "c60b8759-7e0c-4465-8724-c6ff5a9d04b4" (UID: "c60b8759-7e0c-4465-8724-c6ff5a9d04b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.669273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-kube-api-access-kmfmg" (OuterVolumeSpecName: "kube-api-access-kmfmg") pod "c60b8759-7e0c-4465-8724-c6ff5a9d04b4" (UID: "c60b8759-7e0c-4465-8724-c6ff5a9d04b4"). InnerVolumeSpecName "kube-api-access-kmfmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.717557 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c60b8759-7e0c-4465-8724-c6ff5a9d04b4" (UID: "c60b8759-7e0c-4465-8724-c6ff5a9d04b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.765103 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.765156 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmfmg\" (UniqueName: \"kubernetes.io/projected/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-kube-api-access-kmfmg\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:08 crc kubenswrapper[4812]: I1124 21:43:08.765175 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60b8759-7e0c-4465-8724-c6ff5a9d04b4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:09 crc kubenswrapper[4812]: I1124 21:43:09.362424 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swdnx" event={"ID":"c60b8759-7e0c-4465-8724-c6ff5a9d04b4","Type":"ContainerDied","Data":"49b2d3bd8c8dec03153ee4acfaf20bde242a52916f1dea8ee06386978ffa437b"} Nov 24 21:43:09 crc kubenswrapper[4812]: I1124 21:43:09.362744 4812 scope.go:117] "RemoveContainer" containerID="a5c443d1fbf97cc7ab2b61b02d7912854988ca6e0c74c4d7f7acaddb4ad57c58" Nov 24 21:43:09 crc kubenswrapper[4812]: I1124 21:43:09.362530 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swdnx" Nov 24 21:43:09 crc kubenswrapper[4812]: I1124 21:43:09.407441 4812 scope.go:117] "RemoveContainer" containerID="98f1398cc62f0e19ce33674b8da3d4113b804ffddfcde04a031be330048c152d" Nov 24 21:43:09 crc kubenswrapper[4812]: I1124 21:43:09.419948 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swdnx"] Nov 24 21:43:09 crc kubenswrapper[4812]: I1124 21:43:09.426106 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-swdnx"] Nov 24 21:43:09 crc kubenswrapper[4812]: I1124 21:43:09.434540 4812 scope.go:117] "RemoveContainer" containerID="9b8cfdc550a27d7cff79aafa218612a5c41e92a864364ab9aa9196f25bd33735" Nov 24 21:43:10 crc kubenswrapper[4812]: I1124 21:43:10.987292 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" path="/var/lib/kubelet/pods/c60b8759-7e0c-4465-8724-c6ff5a9d04b4/volumes" Nov 24 21:44:03 crc kubenswrapper[4812]: I1124 21:44:02.999536 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:44:03 crc kubenswrapper[4812]: I1124 21:44:03.000063 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:44:32 crc kubenswrapper[4812]: I1124 21:44:32.998564 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:44:32 crc kubenswrapper[4812]: I1124 21:44:32.999050 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.174565 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7"] Nov 24 21:45:00 crc kubenswrapper[4812]: E1124 21:45:00.175965 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="extract-utilities" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.175991 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="extract-utilities" Nov 24 21:45:00 crc kubenswrapper[4812]: E1124 21:45:00.176030 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="registry-server" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.176043 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="registry-server" Nov 24 21:45:00 crc kubenswrapper[4812]: E1124 21:45:00.176104 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="extract-content" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.176115 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="extract-content" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.176711 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60b8759-7e0c-4465-8724-c6ff5a9d04b4" containerName="registry-server" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.177728 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.180629 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.181586 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.184909 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7"] Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.360446 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp55x\" (UniqueName: \"kubernetes.io/projected/6458dccd-2042-45e9-8f57-9b1098c6ad01-kube-api-access-hp55x\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.360530 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6458dccd-2042-45e9-8f57-9b1098c6ad01-config-volume\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.360567 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6458dccd-2042-45e9-8f57-9b1098c6ad01-secret-volume\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.462466 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp55x\" (UniqueName: \"kubernetes.io/projected/6458dccd-2042-45e9-8f57-9b1098c6ad01-kube-api-access-hp55x\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.462515 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6458dccd-2042-45e9-8f57-9b1098c6ad01-config-volume\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.462534 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6458dccd-2042-45e9-8f57-9b1098c6ad01-secret-volume\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.463537 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6458dccd-2042-45e9-8f57-9b1098c6ad01-config-volume\") pod 
\"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.472753 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6458dccd-2042-45e9-8f57-9b1098c6ad01-secret-volume\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.478429 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp55x\" (UniqueName: \"kubernetes.io/projected/6458dccd-2042-45e9-8f57-9b1098c6ad01-kube-api-access-hp55x\") pod \"collect-profiles-29400345-d7nz7\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:00 crc kubenswrapper[4812]: I1124 21:45:00.515103 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:01 crc kubenswrapper[4812]: I1124 21:45:01.042741 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7"] Nov 24 21:45:01 crc kubenswrapper[4812]: W1124 21:45:01.047462 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6458dccd_2042_45e9_8f57_9b1098c6ad01.slice/crio-f5108ed8c9d4d6bdb53666c473bb4a2369f9726fb8bb806d921193afb5c4dace WatchSource:0}: Error finding container f5108ed8c9d4d6bdb53666c473bb4a2369f9726fb8bb806d921193afb5c4dace: Status 404 returned error can't find the container with id f5108ed8c9d4d6bdb53666c473bb4a2369f9726fb8bb806d921193afb5c4dace Nov 24 21:45:01 crc kubenswrapper[4812]: I1124 21:45:01.421315 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" event={"ID":"6458dccd-2042-45e9-8f57-9b1098c6ad01","Type":"ContainerStarted","Data":"285468b88e1115752fa72deed5752e23cd91279627f4daecce4ad100ed4ed6ef"} Nov 24 21:45:01 crc kubenswrapper[4812]: I1124 21:45:01.421372 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" event={"ID":"6458dccd-2042-45e9-8f57-9b1098c6ad01","Type":"ContainerStarted","Data":"f5108ed8c9d4d6bdb53666c473bb4a2369f9726fb8bb806d921193afb5c4dace"} Nov 24 21:45:01 crc kubenswrapper[4812]: I1124 21:45:01.445866 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" podStartSLOduration=1.44584402 podStartE2EDuration="1.44584402s" podCreationTimestamp="2025-11-24 21:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:01.436099044 +0000 UTC m=+8895.225051415" watchObservedRunningTime="2025-11-24 21:45:01.44584402 +0000 UTC m=+8895.234796391" Nov 24 21:45:02 crc kubenswrapper[4812]: I1124 21:45:02.437384 4812 generic.go:334] "Generic (PLEG): container finished" podID="6458dccd-2042-45e9-8f57-9b1098c6ad01" containerID="285468b88e1115752fa72deed5752e23cd91279627f4daecce4ad100ed4ed6ef" exitCode=0 Nov 24 21:45:02 crc kubenswrapper[4812]: I1124 21:45:02.437480 
4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" event={"ID":"6458dccd-2042-45e9-8f57-9b1098c6ad01","Type":"ContainerDied","Data":"285468b88e1115752fa72deed5752e23cd91279627f4daecce4ad100ed4ed6ef"} Nov 24 21:45:02 crc kubenswrapper[4812]: I1124 21:45:02.999076 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:45:02 crc kubenswrapper[4812]: I1124 21:45:02.999448 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:45:02 crc kubenswrapper[4812]: I1124 21:45:02.999517 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:45:03 crc kubenswrapper[4812]: I1124 21:45:03.000574 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:45:03 crc kubenswrapper[4812]: I1124 21:45:03.000679 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" gracePeriod=600 Nov 24 21:45:03 crc kubenswrapper[4812]: E1124 21:45:03.150080 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:45:03 crc kubenswrapper[4812]: I1124 21:45:03.458029 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" exitCode=0 Nov 24 21:45:03 crc kubenswrapper[4812]: I1124 21:45:03.458099 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337"} Nov 24 21:45:03 crc kubenswrapper[4812]: I1124 21:45:03.458194 4812 scope.go:117] "RemoveContainer" containerID="b3f56c073c00193c8a886f24891a362855bcd6c55ec97ee2a6fb66dec8e26073" Nov 24 21:45:03 crc kubenswrapper[4812]: I1124 21:45:03.458988 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:45:03 crc kubenswrapper[4812]: E1124 
21:45:03.459479 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:45:03 crc kubenswrapper[4812]: I1124 21:45:03.863562 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.052118 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6458dccd-2042-45e9-8f57-9b1098c6ad01-secret-volume\") pod \"6458dccd-2042-45e9-8f57-9b1098c6ad01\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.052268 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6458dccd-2042-45e9-8f57-9b1098c6ad01-config-volume\") pod \"6458dccd-2042-45e9-8f57-9b1098c6ad01\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.052370 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp55x\" (UniqueName: \"kubernetes.io/projected/6458dccd-2042-45e9-8f57-9b1098c6ad01-kube-api-access-hp55x\") pod \"6458dccd-2042-45e9-8f57-9b1098c6ad01\" (UID: \"6458dccd-2042-45e9-8f57-9b1098c6ad01\") " Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.053018 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6458dccd-2042-45e9-8f57-9b1098c6ad01-config-volume" (OuterVolumeSpecName: "config-volume") pod "6458dccd-2042-45e9-8f57-9b1098c6ad01" (UID: "6458dccd-2042-45e9-8f57-9b1098c6ad01"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.061053 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6458dccd-2042-45e9-8f57-9b1098c6ad01-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6458dccd-2042-45e9-8f57-9b1098c6ad01" (UID: "6458dccd-2042-45e9-8f57-9b1098c6ad01"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.061264 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6458dccd-2042-45e9-8f57-9b1098c6ad01-kube-api-access-hp55x" (OuterVolumeSpecName: "kube-api-access-hp55x") pod "6458dccd-2042-45e9-8f57-9b1098c6ad01" (UID: "6458dccd-2042-45e9-8f57-9b1098c6ad01"). InnerVolumeSpecName "kube-api-access-hp55x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.155927 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp55x\" (UniqueName: \"kubernetes.io/projected/6458dccd-2042-45e9-8f57-9b1098c6ad01-kube-api-access-hp55x\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.155993 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6458dccd-2042-45e9-8f57-9b1098c6ad01-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.156012 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6458dccd-2042-45e9-8f57-9b1098c6ad01-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.471956 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" event={"ID":"6458dccd-2042-45e9-8f57-9b1098c6ad01","Type":"ContainerDied","Data":"f5108ed8c9d4d6bdb53666c473bb4a2369f9726fb8bb806d921193afb5c4dace"} Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.472246 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5108ed8c9d4d6bdb53666c473bb4a2369f9726fb8bb806d921193afb5c4dace" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.472046 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-d7nz7" Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.522841 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"] Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.531812 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400300-4tcpn"] Nov 24 21:45:04 crc kubenswrapper[4812]: I1124 21:45:04.982555 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5383a9-1cd1-478f-88d5-fe697f9dfe4d" path="/var/lib/kubelet/pods/fb5383a9-1cd1-478f-88d5-fe697f9dfe4d/volumes" Nov 24 21:45:18 crc kubenswrapper[4812]: I1124 21:45:18.967679 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:45:18 crc kubenswrapper[4812]: E1124 21:45:18.969709 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:45:32 crc kubenswrapper[4812]: I1124 21:45:32.966546 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:45:32 crc kubenswrapper[4812]: E1124 21:45:32.967808 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:45:45 crc kubenswrapper[4812]: I1124 21:45:45.966475 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:45:45 crc kubenswrapper[4812]: E1124 21:45:45.967300 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:45:57 crc kubenswrapper[4812]: I1124 21:45:57.965810 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:45:57 crc kubenswrapper[4812]: E1124 21:45:57.966785 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:46:04 crc kubenswrapper[4812]: I1124 21:46:04.769892 4812 scope.go:117] "RemoveContainer" containerID="93efdf1f7c1d67048fb9f244bb3f895fbf3a3f9b97b2534cdb480eb31402bfc7" Nov 24 21:46:08 crc kubenswrapper[4812]: I1124 21:46:08.966153 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:46:08 crc kubenswrapper[4812]: E1124 21:46:08.967162 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:46:20 crc kubenswrapper[4812]: I1124 21:46:20.967506 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:46:20 crc kubenswrapper[4812]: E1124 21:46:20.968922 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:46:31 crc kubenswrapper[4812]: I1124 21:46:31.966801 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:46:31 crc kubenswrapper[4812]: E1124 21:46:31.967734 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:46:45 crc kubenswrapper[4812]: I1124 21:46:45.972306 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:46:45 crc kubenswrapper[4812]: E1124 21:46:45.975022 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:46:56 crc kubenswrapper[4812]: I1124 21:46:56.976226 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:46:56 crc kubenswrapper[4812]: E1124 21:46:56.977552 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.436314 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hgg9f"] Nov 24 21:47:00 crc kubenswrapper[4812]: E1124 21:47:00.437531 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6458dccd-2042-45e9-8f57-9b1098c6ad01" containerName="collect-profiles" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.437549 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6458dccd-2042-45e9-8f57-9b1098c6ad01" containerName="collect-profiles" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.439472 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6458dccd-2042-45e9-8f57-9b1098c6ad01" containerName="collect-profiles" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.442201 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.449312 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgg9f"] Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.584131 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-utilities\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.584360 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzvbv\" (UniqueName: \"kubernetes.io/projected/e8b9d632-055a-43db-b040-a40bd9ec5a46-kube-api-access-kzvbv\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.584565 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-catalog-content\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.686893 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzvbv\" (UniqueName: \"kubernetes.io/projected/e8b9d632-055a-43db-b040-a40bd9ec5a46-kube-api-access-kzvbv\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.687046 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-catalog-content\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.687248 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-utilities\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.687720 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-catalog-content\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.687954 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-utilities\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.720444 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kzvbv\" (UniqueName: \"kubernetes.io/projected/e8b9d632-055a-43db-b040-a40bd9ec5a46-kube-api-access-kzvbv\") pod \"community-operators-hgg9f\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") " pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:00 crc kubenswrapper[4812]: I1124 21:47:00.793677 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgg9f" Nov 24 21:47:01 crc kubenswrapper[4812]: I1124 21:47:01.301374 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hgg9f"] Nov 24 21:47:01 crc kubenswrapper[4812]: I1124 21:47:01.992069 4812 generic.go:334] "Generic (PLEG): container finished" podID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerID="df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68" exitCode=0 Nov 24 21:47:01 crc kubenswrapper[4812]: I1124 21:47:01.992127 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgg9f" event={"ID":"e8b9d632-055a-43db-b040-a40bd9ec5a46","Type":"ContainerDied","Data":"df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68"} Nov 24 21:47:01 crc kubenswrapper[4812]: I1124 21:47:01.992439 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgg9f" event={"ID":"e8b9d632-055a-43db-b040-a40bd9ec5a46","Type":"ContainerStarted","Data":"73a31f99b92cc2104b717b03ccab6cd1ac0bbb18e372ec50fe5cbec214ad7e00"} Nov 24 21:47:03 crc kubenswrapper[4812]: I1124 21:47:03.011936 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgg9f" event={"ID":"e8b9d632-055a-43db-b040-a40bd9ec5a46","Type":"ContainerStarted","Data":"bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2"} Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.031006 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jsbzf"] Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.034237 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.042473 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsbzf"] Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.169980 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trtmc\" (UniqueName: \"kubernetes.io/projected/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-kube-api-access-trtmc\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.170073 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-utilities\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.170244 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-catalog-content\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.271843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trtmc\" (UniqueName: \"kubernetes.io/projected/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-kube-api-access-trtmc\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.271911 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-utilities\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.272005 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-catalog-content\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.272746 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-catalog-content\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.272986 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-utilities\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.298218 4812 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-trtmc\" (UniqueName: \"kubernetes.io/projected/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-kube-api-access-trtmc\") pod \"redhat-marketplace-jsbzf\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.394248 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:04 crc kubenswrapper[4812]: I1124 21:47:04.882399 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsbzf"] Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.047079 4812 generic.go:334] "Generic (PLEG): container finished" podID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerID="bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2" exitCode=0 Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.047151 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgg9f" event={"ID":"e8b9d632-055a-43db-b040-a40bd9ec5a46","Type":"ContainerDied","Data":"bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2"} Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.049471 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsbzf" event={"ID":"c4623cb8-dbe4-4a6c-aebb-486f06b1b613","Type":"ContainerStarted","Data":"79b6910996a4c56945971feeb96a6dac88069c8f09543f407fcebe03d76a5e6d"} Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.444407 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrrxz"] Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.448810 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.456457 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrrxz"] Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.500619 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-utilities\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.500828 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-catalog-content\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.500953 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bz7\" (UniqueName: \"kubernetes.io/projected/677b7c4b-e071-46a9-bb24-4c337c434acc-kube-api-access-79bz7\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.603044 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-utilities\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.603209 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-catalog-content\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.603362 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bz7\" (UniqueName: \"kubernetes.io/projected/677b7c4b-e071-46a9-bb24-4c337c434acc-kube-api-access-79bz7\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.603908 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-utilities\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:05 crc kubenswrapper[4812]: I1124 21:47:05.603983 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-catalog-content\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:06 crc kubenswrapper[4812]: I1124 21:47:06.021151 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-79bz7\" (UniqueName: \"kubernetes.io/projected/677b7c4b-e071-46a9-bb24-4c337c434acc-kube-api-access-79bz7\") pod \"redhat-operators-jrrxz\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:06 crc kubenswrapper[4812]: I1124 21:47:06.079905 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerID="0a5ea4c46b5c49852327162912e4b9a8154e31a4b048ac6baf12ac5fe074b654" exitCode=0 Nov 24 21:47:06 crc kubenswrapper[4812]: I1124 21:47:06.079955 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsbzf" event={"ID":"c4623cb8-dbe4-4a6c-aebb-486f06b1b613","Type":"ContainerDied","Data":"0a5ea4c46b5c49852327162912e4b9a8154e31a4b048ac6baf12ac5fe074b654"} Nov 24 21:47:06 crc kubenswrapper[4812]: I1124 21:47:06.103902 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:06 crc kubenswrapper[4812]: I1124 21:47:06.695302 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrrxz"] Nov 24 21:47:07 crc kubenswrapper[4812]: I1124 21:47:07.104948 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgg9f" event={"ID":"e8b9d632-055a-43db-b040-a40bd9ec5a46","Type":"ContainerStarted","Data":"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935"} Nov 24 21:47:07 crc kubenswrapper[4812]: I1124 21:47:07.109947 4812 generic.go:334] "Generic (PLEG): container finished" podID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerID="63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67" exitCode=0 Nov 24 21:47:07 crc kubenswrapper[4812]: I1124 21:47:07.109992 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrrxz" event={"ID":"677b7c4b-e071-46a9-bb24-4c337c434acc","Type":"ContainerDied","Data":"63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67"} Nov 24 21:47:07 crc kubenswrapper[4812]: I1124 21:47:07.110007 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrrxz" event={"ID":"677b7c4b-e071-46a9-bb24-4c337c434acc","Type":"ContainerStarted","Data":"4ceb72c115f24fdcaa912e18528ae909f55a3f7d9edf244c7770e1748726743b"} Nov 24 21:47:07 crc kubenswrapper[4812]: I1124 21:47:07.113132 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsbzf" event={"ID":"c4623cb8-dbe4-4a6c-aebb-486f06b1b613","Type":"ContainerStarted","Data":"28d0ab93f59c3de0b614dfea878b5694d93e74f7cfc4b804036776c4d4a6aaa8"} Nov 24 21:47:07 crc kubenswrapper[4812]: I1124 21:47:07.128284 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hgg9f" podStartSLOduration=3.574676783 podStartE2EDuration="7.128258627s" podCreationTimestamp="2025-11-24 21:47:00 +0000 UTC" firstStartedPulling="2025-11-24 21:47:01.994162812 +0000 UTC m=+9015.783115183" lastFinishedPulling="2025-11-24 21:47:05.547744636 +0000 UTC m=+9019.336697027" observedRunningTime="2025-11-24 21:47:07.127313391 +0000 UTC m=+9020.916265762" watchObservedRunningTime="2025-11-24 21:47:07.128258627 +0000 UTC m=+9020.917211018" Nov 24 21:47:07 crc kubenswrapper[4812]: I1124 21:47:07.965862 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 
Nov 24 21:47:08 crc kubenswrapper[4812]: I1124 21:47:08.130870 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerID="28d0ab93f59c3de0b614dfea878b5694d93e74f7cfc4b804036776c4d4a6aaa8" exitCode=0
Nov 24 21:47:08 crc kubenswrapper[4812]: I1124 21:47:08.130911 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsbzf" event={"ID":"c4623cb8-dbe4-4a6c-aebb-486f06b1b613","Type":"ContainerDied","Data":"28d0ab93f59c3de0b614dfea878b5694d93e74f7cfc4b804036776c4d4a6aaa8"}
Nov 24 21:47:09 crc kubenswrapper[4812]: I1124 21:47:09.146159 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrrxz" event={"ID":"677b7c4b-e071-46a9-bb24-4c337c434acc","Type":"ContainerStarted","Data":"75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9"}
Nov 24 21:47:09 crc kubenswrapper[4812]: I1124 21:47:09.150161 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsbzf" event={"ID":"c4623cb8-dbe4-4a6c-aebb-486f06b1b613","Type":"ContainerStarted","Data":"f42ca3d0066ede7771b0f71920924b26c0a37cf7a16e6e5c6199e967a3fb8d0e"}
Nov 24 21:47:09 crc kubenswrapper[4812]: I1124 21:47:09.186024 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jsbzf" podStartSLOduration=2.703696532 podStartE2EDuration="5.186006626s" podCreationTimestamp="2025-11-24 21:47:04 +0000 UTC" firstStartedPulling="2025-11-24 21:47:06.081607836 +0000 UTC m=+9019.870560207" lastFinishedPulling="2025-11-24 21:47:08.56391793 +0000 UTC m=+9022.352870301" observedRunningTime="2025-11-24 21:47:09.181082777 +0000 UTC m=+9022.970035148" watchObservedRunningTime="2025-11-24 21:47:09.186006626 +0000 UTC m=+9022.974958987"
Nov 24 21:47:10 crc kubenswrapper[4812]: I1124 21:47:10.794016 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hgg9f"
Nov 24 21:47:10 crc kubenswrapper[4812]: I1124 21:47:10.794513 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hgg9f"
Nov 24 21:47:10 crc kubenswrapper[4812]: I1124 21:47:10.852361 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hgg9f"
Nov 24 21:47:11 crc kubenswrapper[4812]: I1124 21:47:11.256916 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hgg9f"
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.238490 4812 generic.go:334] "Generic (PLEG): container finished" podID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerID="75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9" exitCode=0
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.238586 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrrxz" event={"ID":"677b7c4b-e071-46a9-bb24-4c337c434acc","Type":"ContainerDied","Data":"75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9"}
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.394624 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jsbzf"
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.394679 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jsbzf"
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.416922 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgg9f"]
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.417725 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hgg9f" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="registry-server" containerID="cri-o://7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935" gracePeriod=2
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.465213 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jsbzf"
Nov 24 21:47:14 crc kubenswrapper[4812]: E1124 21:47:14.513451 4812 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8b9d632_055a_43db_b040_a40bd9ec5a46.slice/crio-7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935.scope\": RecentStats: unable to find data in memory cache]"
Nov 24 21:47:14 crc kubenswrapper[4812]: I1124 21:47:14.940312 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgg9f"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.026655 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzvbv\" (UniqueName: \"kubernetes.io/projected/e8b9d632-055a-43db-b040-a40bd9ec5a46-kube-api-access-kzvbv\") pod \"e8b9d632-055a-43db-b040-a40bd9ec5a46\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") "
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.026814 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-utilities\") pod \"e8b9d632-055a-43db-b040-a40bd9ec5a46\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") "
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.026858 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-catalog-content\") pod \"e8b9d632-055a-43db-b040-a40bd9ec5a46\" (UID: \"e8b9d632-055a-43db-b040-a40bd9ec5a46\") "
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.027644 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-utilities" (OuterVolumeSpecName: "utilities") pod "e8b9d632-055a-43db-b040-a40bd9ec5a46" (UID: "e8b9d632-055a-43db-b040-a40bd9ec5a46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.036914 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8b9d632-055a-43db-b040-a40bd9ec5a46-kube-api-access-kzvbv" (OuterVolumeSpecName: "kube-api-access-kzvbv") pod "e8b9d632-055a-43db-b040-a40bd9ec5a46" (UID: "e8b9d632-055a-43db-b040-a40bd9ec5a46"). InnerVolumeSpecName "kube-api-access-kzvbv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.070170 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8b9d632-055a-43db-b040-a40bd9ec5a46" (UID: "e8b9d632-055a-43db-b040-a40bd9ec5a46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.128878 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzvbv\" (UniqueName: \"kubernetes.io/projected/e8b9d632-055a-43db-b040-a40bd9ec5a46-kube-api-access-kzvbv\") on node \"crc\" DevicePath \"\""
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.128913 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.128923 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8b9d632-055a-43db-b040-a40bd9ec5a46-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.251537 4812 generic.go:334] "Generic (PLEG): container finished" podID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerID="7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935" exitCode=0
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.251610 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hgg9f"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.251635 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgg9f" event={"ID":"e8b9d632-055a-43db-b040-a40bd9ec5a46","Type":"ContainerDied","Data":"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935"}
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.251815 4812 scope.go:117] "RemoveContainer" containerID="7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.252181 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hgg9f" event={"ID":"e8b9d632-055a-43db-b040-a40bd9ec5a46","Type":"ContainerDied","Data":"73a31f99b92cc2104b717b03ccab6cd1ac0bbb18e372ec50fe5cbec214ad7e00"}
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.256229 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrrxz" event={"ID":"677b7c4b-e071-46a9-bb24-4c337c434acc","Type":"ContainerStarted","Data":"e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672"}
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.284415 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jrrxz" podStartSLOduration=2.734423773 podStartE2EDuration="10.284393093s" podCreationTimestamp="2025-11-24 21:47:05 +0000 UTC" firstStartedPulling="2025-11-24 21:47:07.111225155 +0000 UTC m=+9020.900177526" lastFinishedPulling="2025-11-24 21:47:14.661194485 +0000 UTC m=+9028.450146846" observedRunningTime="2025-11-24 21:47:15.275471981 +0000 UTC m=+9029.064424372" watchObservedRunningTime="2025-11-24 21:47:15.284393093 +0000 UTC m=+9029.073345474"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.293213 4812 scope.go:117] "RemoveContainer" containerID="bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.308114 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hgg9f"]
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.319908 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hgg9f"]
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.324031 4812 scope.go:117] "RemoveContainer" containerID="df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.334386 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jsbzf"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.345996 4812 scope.go:117] "RemoveContainer" containerID="7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935"
Nov 24 21:47:15 crc kubenswrapper[4812]: E1124 21:47:15.346351 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935\": container with ID starting with 7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935 not found: ID does not exist" containerID="7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935"
Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.346387 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935"} err="failed to get container status \"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935\": rpc error: code = NotFound desc = could not find container \"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935\": container with ID starting with 7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935 not found: ID does not exist"
containerID={"Type":"cri-o","ID":"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935"} err="failed to get container status \"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935\": rpc error: code = NotFound desc = could not find container \"7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935\": container with ID starting with 7d58a6f63b102f54a66c4d52e49e42f0907438313f141ec300bbb936b0c6a935 not found: ID does not exist" Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.346410 4812 scope.go:117] "RemoveContainer" containerID="bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2" Nov 24 21:47:15 crc kubenswrapper[4812]: E1124 21:47:15.346780 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2\": container with ID starting with bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2 not found: ID does not exist" containerID="bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2" Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.346809 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2"} err="failed to get container status \"bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2\": rpc error: code = NotFound desc = could not find container \"bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2\": container with ID starting with bd987b7287e8ee4049d4a85377b89077dce25f9fd40012b36032b3b18a56e1b2 not found: ID does not exist" Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.346825 4812 scope.go:117] "RemoveContainer" containerID="df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68" Nov 24 21:47:15 crc kubenswrapper[4812]: E1124 21:47:15.347055 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68\": container with ID starting with df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68 not found: ID does not exist" containerID="df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68" Nov 24 21:47:15 crc kubenswrapper[4812]: I1124 21:47:15.347081 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68"} err="failed to get container status \"df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68\": rpc error: code = NotFound desc = could not find container \"df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68\": container with ID starting with df97250397a600a908abbcad5de3c4b4e6faf67fbe118b0c7b292af621bfec68 not found: ID does not exist" Nov 24 21:47:16 crc kubenswrapper[4812]: I1124 21:47:16.104119 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:16 crc kubenswrapper[4812]: I1124 21:47:16.104539 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:16 crc kubenswrapper[4812]: I1124 21:47:16.990949 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" 
path="/var/lib/kubelet/pods/e8b9d632-055a-43db-b040-a40bd9ec5a46/volumes" Nov 24 21:47:17 crc kubenswrapper[4812]: I1124 21:47:17.588063 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jrrxz" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="registry-server" probeResult="failure" output=< Nov 24 21:47:17 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:47:17 crc kubenswrapper[4812]: > Nov 24 21:47:17 crc kubenswrapper[4812]: I1124 21:47:17.614825 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsbzf"] Nov 24 21:47:17 crc kubenswrapper[4812]: I1124 21:47:17.615072 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jsbzf" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="registry-server" containerID="cri-o://f42ca3d0066ede7771b0f71920924b26c0a37cf7a16e6e5c6199e967a3fb8d0e" gracePeriod=2 Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.300780 4812 generic.go:334] "Generic (PLEG): container finished" podID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerID="f42ca3d0066ede7771b0f71920924b26c0a37cf7a16e6e5c6199e967a3fb8d0e" exitCode=0 Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.301007 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsbzf" event={"ID":"c4623cb8-dbe4-4a6c-aebb-486f06b1b613","Type":"ContainerDied","Data":"f42ca3d0066ede7771b0f71920924b26c0a37cf7a16e6e5c6199e967a3fb8d0e"} Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.301185 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jsbzf" event={"ID":"c4623cb8-dbe4-4a6c-aebb-486f06b1b613","Type":"ContainerDied","Data":"79b6910996a4c56945971feeb96a6dac88069c8f09543f407fcebe03d76a5e6d"} Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.301210 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b6910996a4c56945971feeb96a6dac88069c8f09543f407fcebe03d76a5e6d" Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.783634 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.910627 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trtmc\" (UniqueName: \"kubernetes.io/projected/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-kube-api-access-trtmc\") pod \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.910823 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-utilities\") pod \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.911014 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-catalog-content\") pod \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\" (UID: \"c4623cb8-dbe4-4a6c-aebb-486f06b1b613\") " Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.911894 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-utilities" (OuterVolumeSpecName: "utilities") pod "c4623cb8-dbe4-4a6c-aebb-486f06b1b613" (UID: "c4623cb8-dbe4-4a6c-aebb-486f06b1b613"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.918011 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-kube-api-access-trtmc" (OuterVolumeSpecName: "kube-api-access-trtmc") pod "c4623cb8-dbe4-4a6c-aebb-486f06b1b613" (UID: "c4623cb8-dbe4-4a6c-aebb-486f06b1b613"). InnerVolumeSpecName "kube-api-access-trtmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:47:18 crc kubenswrapper[4812]: I1124 21:47:18.934567 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4623cb8-dbe4-4a6c-aebb-486f06b1b613" (UID: "c4623cb8-dbe4-4a6c-aebb-486f06b1b613"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:19 crc kubenswrapper[4812]: I1124 21:47:19.014515 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trtmc\" (UniqueName: \"kubernetes.io/projected/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-kube-api-access-trtmc\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:19 crc kubenswrapper[4812]: I1124 21:47:19.015164 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:19 crc kubenswrapper[4812]: I1124 21:47:19.015273 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4623cb8-dbe4-4a6c-aebb-486f06b1b613-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:19 crc kubenswrapper[4812]: I1124 21:47:19.317938 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jsbzf" Nov 24 21:47:19 crc kubenswrapper[4812]: I1124 21:47:19.351302 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsbzf"] Nov 24 21:47:19 crc kubenswrapper[4812]: I1124 21:47:19.361320 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jsbzf"] Nov 24 21:47:20 crc kubenswrapper[4812]: I1124 21:47:20.965673 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:47:20 crc kubenswrapper[4812]: E1124 21:47:20.966024 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:47:20 crc kubenswrapper[4812]: I1124 21:47:20.979481 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" path="/var/lib/kubelet/pods/c4623cb8-dbe4-4a6c-aebb-486f06b1b613/volumes" Nov 24 21:47:27 crc kubenswrapper[4812]: I1124 21:47:27.186670 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jrrxz" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="registry-server" probeResult="failure" output=< Nov 24 21:47:27 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:47:27 crc kubenswrapper[4812]: > Nov 24 21:47:35 crc kubenswrapper[4812]: I1124 21:47:35.966033 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:47:35 crc kubenswrapper[4812]: E1124 21:47:35.967676 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:47:36 crc kubenswrapper[4812]: I1124 21:47:36.151401 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:36 crc kubenswrapper[4812]: I1124 21:47:36.194546 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:36 crc kubenswrapper[4812]: I1124 21:47:36.640527 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrrxz"] Nov 24 21:47:37 crc kubenswrapper[4812]: I1124 21:47:37.513099 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrrxz" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="registry-server" containerID="cri-o://e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672" gracePeriod=2 Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.027831 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.100225 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-utilities\") pod \"677b7c4b-e071-46a9-bb24-4c337c434acc\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.100605 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-catalog-content\") pod \"677b7c4b-e071-46a9-bb24-4c337c434acc\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.100787 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-utilities" (OuterVolumeSpecName: "utilities") pod "677b7c4b-e071-46a9-bb24-4c337c434acc" (UID: "677b7c4b-e071-46a9-bb24-4c337c434acc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.100829 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bz7\" (UniqueName: \"kubernetes.io/projected/677b7c4b-e071-46a9-bb24-4c337c434acc-kube-api-access-79bz7\") pod \"677b7c4b-e071-46a9-bb24-4c337c434acc\" (UID: \"677b7c4b-e071-46a9-bb24-4c337c434acc\") " Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.101500 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.122686 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677b7c4b-e071-46a9-bb24-4c337c434acc-kube-api-access-79bz7" (OuterVolumeSpecName: "kube-api-access-79bz7") pod "677b7c4b-e071-46a9-bb24-4c337c434acc" (UID: "677b7c4b-e071-46a9-bb24-4c337c434acc"). InnerVolumeSpecName "kube-api-access-79bz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.208590 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79bz7\" (UniqueName: \"kubernetes.io/projected/677b7c4b-e071-46a9-bb24-4c337c434acc-kube-api-access-79bz7\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.238936 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "677b7c4b-e071-46a9-bb24-4c337c434acc" (UID: "677b7c4b-e071-46a9-bb24-4c337c434acc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.312376 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/677b7c4b-e071-46a9-bb24-4c337c434acc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.526707 4812 generic.go:334] "Generic (PLEG): container finished" podID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerID="e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672" exitCode=0 Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.526749 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrrxz" event={"ID":"677b7c4b-e071-46a9-bb24-4c337c434acc","Type":"ContainerDied","Data":"e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672"} Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.526776 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrrxz" event={"ID":"677b7c4b-e071-46a9-bb24-4c337c434acc","Type":"ContainerDied","Data":"4ceb72c115f24fdcaa912e18528ae909f55a3f7d9edf244c7770e1748726743b"} Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.526793 4812 scope.go:117] "RemoveContainer" containerID="e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.526801 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrrxz" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.565634 4812 scope.go:117] "RemoveContainer" containerID="75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.574881 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrrxz"] Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.586263 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrrxz"] Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.618038 4812 scope.go:117] "RemoveContainer" containerID="63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.639872 4812 scope.go:117] "RemoveContainer" containerID="e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672" Nov 24 21:47:38 crc kubenswrapper[4812]: E1124 21:47:38.640248 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672\": container with ID starting with e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672 not found: ID does not exist" containerID="e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.640281 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672"} err="failed to get container status \"e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672\": rpc error: code = NotFound desc = could not find container \"e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672\": container with ID starting with e2077719ba204cb2683a8ddc98f72984b7032f2fc66aca046f34e538da585672 not found: ID does not exist" Nov 24 21:47:38 crc 
kubenswrapper[4812]: I1124 21:47:38.640301 4812 scope.go:117] "RemoveContainer" containerID="75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9" Nov 24 21:47:38 crc kubenswrapper[4812]: E1124 21:47:38.640821 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9\": container with ID starting with 75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9 not found: ID does not exist" containerID="75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.640843 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9"} err="failed to get container status \"75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9\": rpc error: code = NotFound desc = could not find container \"75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9\": container with ID starting with 75c8fa7885cd15caa9fe5d4217b68e9a092b00bff1235bee01fe4d2c73fe0fd9 not found: ID does not exist" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.640859 4812 scope.go:117] "RemoveContainer" containerID="63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67" Nov 24 21:47:38 crc kubenswrapper[4812]: E1124 21:47:38.641189 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67\": container with ID starting with 63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67 not found: ID does not exist" containerID="63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.641209 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67"} err="failed to get container status \"63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67\": rpc error: code = NotFound desc = could not find container \"63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67\": container with ID starting with 63196c553c3a9b4623a192168ec8f9897b7a6d7aae9a3e6a00ed87b2565aaa67 not found: ID does not exist" Nov 24 21:47:38 crc kubenswrapper[4812]: I1124 21:47:38.979532 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" path="/var/lib/kubelet/pods/677b7c4b-e071-46a9-bb24-4c337c434acc/volumes" Nov 24 21:47:48 crc kubenswrapper[4812]: I1124 21:47:48.966823 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:47:48 crc kubenswrapper[4812]: E1124 21:47:48.967966 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:48:03 crc kubenswrapper[4812]: I1124 21:48:03.966283 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" 
Nov 24 21:48:03 crc kubenswrapper[4812]: E1124 21:48:03.967240 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:48:16 crc kubenswrapper[4812]: I1124 21:48:16.972737 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:48:16 crc kubenswrapper[4812]: E1124 21:48:16.973626 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:48:31 crc kubenswrapper[4812]: I1124 21:48:31.965925 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:48:31 crc kubenswrapper[4812]: E1124 21:48:31.967237 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:48:32 crc kubenswrapper[4812]: I1124 21:48:32.203002 4812 generic.go:334] "Generic (PLEG): container finished" podID="6a59db0a-69f8-4843-9570-3dad3cb5d916" containerID="e3834309878d350b09f53aedc5ba8c6bed0d0a04d14b49d7b5956e2d6681227f" exitCode=0 Nov 24 21:48:32 crc kubenswrapper[4812]: I1124 21:48:32.203090 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" event={"ID":"6a59db0a-69f8-4843-9570-3dad3cb5d916","Type":"ContainerDied","Data":"e3834309878d350b09f53aedc5ba8c6bed0d0a04d14b49d7b5956e2d6681227f"} Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.238437 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" event={"ID":"6a59db0a-69f8-4843-9570-3dad3cb5d916","Type":"ContainerDied","Data":"48f7f82dbaadb839ffd511ec3c1fe1d0c960d1c87f556ba733c13dc9341d9678"} Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.239180 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f7f82dbaadb839ffd511ec3c1fe1d0c960d1c87f556ba733c13dc9341d9678" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.523593 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.697396 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-agent-neutron-config-0\") pod \"6a59db0a-69f8-4843-9570-3dad3cb5d916\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.697454 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-inventory\") pod \"6a59db0a-69f8-4843-9570-3dad3cb5d916\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.697495 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-ssh-key\") pod \"6a59db0a-69f8-4843-9570-3dad3cb5d916\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.697668 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbpr5\" (UniqueName: \"kubernetes.io/projected/6a59db0a-69f8-4843-9570-3dad3cb5d916-kube-api-access-pbpr5\") pod \"6a59db0a-69f8-4843-9570-3dad3cb5d916\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.697718 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-combined-ca-bundle\") pod \"6a59db0a-69f8-4843-9570-3dad3cb5d916\" (UID: \"6a59db0a-69f8-4843-9570-3dad3cb5d916\") " Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.719290 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "6a59db0a-69f8-4843-9570-3dad3cb5d916" (UID: "6a59db0a-69f8-4843-9570-3dad3cb5d916"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.719419 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a59db0a-69f8-4843-9570-3dad3cb5d916-kube-api-access-pbpr5" (OuterVolumeSpecName: "kube-api-access-pbpr5") pod "6a59db0a-69f8-4843-9570-3dad3cb5d916" (UID: "6a59db0a-69f8-4843-9570-3dad3cb5d916"). InnerVolumeSpecName "kube-api-access-pbpr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.741278 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "6a59db0a-69f8-4843-9570-3dad3cb5d916" (UID: "6a59db0a-69f8-4843-9570-3dad3cb5d916"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.744149 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-inventory" (OuterVolumeSpecName: "inventory") pod "6a59db0a-69f8-4843-9570-3dad3cb5d916" (UID: "6a59db0a-69f8-4843-9570-3dad3cb5d916"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.747139 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6a59db0a-69f8-4843-9570-3dad3cb5d916" (UID: "6a59db0a-69f8-4843-9570-3dad3cb5d916"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.801137 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.801171 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.801182 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.801194 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6a59db0a-69f8-4843-9570-3dad3cb5d916-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:34 crc kubenswrapper[4812]: I1124 21:48:34.801203 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbpr5\" (UniqueName: \"kubernetes.io/projected/6a59db0a-69f8-4843-9570-3dad3cb5d916-kube-api-access-pbpr5\") on node \"crc\" DevicePath \"\"" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.255491 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-kdd46" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.676199 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm"] Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.676685 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="extract-utilities" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.676706 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="extract-utilities" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679090 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679104 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679126 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679135 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679158 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="extract-content" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679166 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="extract-content" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679176 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="extract-content" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679184 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="extract-content" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679201 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="extract-utilities" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679209 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="extract-utilities" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679227 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="extract-content" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679234 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="extract-content" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679243 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="extract-utilities" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679251 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="extract-utilities" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679268 4812 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679275 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: E1124 21:48:35.679297 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a59db0a-69f8-4843-9570-3dad3cb5d916" containerName="neutron-sriov-openstack-openstack-cell1" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679304 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a59db0a-69f8-4843-9570-3dad3cb5d916" containerName="neutron-sriov-openstack-openstack-cell1" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679598 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8b9d632-055a-43db-b040-a40bd9ec5a46" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679620 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4623cb8-dbe4-4a6c-aebb-486f06b1b613" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679636 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="677b7c4b-e071-46a9-bb24-4c337c434acc" containerName="registry-server" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.679663 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a59db0a-69f8-4843-9570-3dad3cb5d916" containerName="neutron-sriov-openstack-openstack-cell1" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.680518 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.682781 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.683199 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.683272 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.683283 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.683365 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.694103 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm"] Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.831793 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.832186 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.832400 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjlbs\" (UniqueName: \"kubernetes.io/projected/b613d8c2-5066-4eb3-bad2-4e662e2b5078-kube-api-access-gjlbs\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.832506 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.832840 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.935872 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjlbs\" (UniqueName: \"kubernetes.io/projected/b613d8c2-5066-4eb3-bad2-4e662e2b5078-kube-api-access-gjlbs\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.936030 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.936145 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.936240 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.936301 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.941576 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.941597 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.942330 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.944955 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:35 crc kubenswrapper[4812]: I1124 21:48:35.963300 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjlbs\" (UniqueName: \"kubernetes.io/projected/b613d8c2-5066-4eb3-bad2-4e662e2b5078-kube-api-access-gjlbs\") pod \"neutron-dhcp-openstack-openstack-cell1-lpvlm\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:36 crc kubenswrapper[4812]: I1124 21:48:36.002271 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:48:36 crc kubenswrapper[4812]: I1124 21:48:36.640986 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm"] Nov 24 21:48:36 crc kubenswrapper[4812]: I1124 21:48:36.934619 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:48:37 crc kubenswrapper[4812]: I1124 21:48:37.282771 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" event={"ID":"b613d8c2-5066-4eb3-bad2-4e662e2b5078","Type":"ContainerStarted","Data":"51166c5222268009d6de2ee39e6464874788c74c3d4544267c0ffa27fa869c2f"} Nov 24 21:48:38 crc kubenswrapper[4812]: I1124 21:48:38.298761 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" event={"ID":"b613d8c2-5066-4eb3-bad2-4e662e2b5078","Type":"ContainerStarted","Data":"8773c77354d94be0c831fc17e9491f7bc66d0e076d301831d4ff89e133e64181"} Nov 24 21:48:38 crc kubenswrapper[4812]: I1124 21:48:38.338099 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" podStartSLOduration=2.917939376 podStartE2EDuration="3.338069337s" podCreationTimestamp="2025-11-24 21:48:35 +0000 UTC" firstStartedPulling="2025-11-24 21:48:36.934008779 +0000 UTC m=+9110.722961190" lastFinishedPulling="2025-11-24 21:48:37.35413877 +0000 UTC m=+9111.143091151" observedRunningTime="2025-11-24 21:48:38.33110778 +0000 UTC m=+9112.120060151" watchObservedRunningTime="2025-11-24 21:48:38.338069337 +0000 UTC m=+9112.127021738" Nov 24 21:48:45 crc kubenswrapper[4812]: I1124 21:48:45.967364 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:48:45 crc kubenswrapper[4812]: E1124 21:48:45.968483 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:48:58 crc kubenswrapper[4812]: I1124 21:48:58.966606 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:48:58 crc kubenswrapper[4812]: E1124 21:48:58.967789 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:49:10 crc kubenswrapper[4812]: I1124 21:49:10.966696 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:49:10 crc kubenswrapper[4812]: E1124 21:49:10.967627 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:49:24 crc kubenswrapper[4812]: I1124 21:49:24.966007 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:49:24 crc kubenswrapper[4812]: E1124 21:49:24.966947 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:49:35 crc kubenswrapper[4812]: I1124 21:49:35.966505 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:49:35 crc kubenswrapper[4812]: E1124 21:49:35.967603 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:49:50 crc kubenswrapper[4812]: I1124 21:49:50.966038 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:49:50 crc kubenswrapper[4812]: E1124 21:49:50.967209 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:50:03 crc kubenswrapper[4812]: I1124 21:50:03.966066 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337" Nov 24 21:50:04 crc kubenswrapper[4812]: I1124 21:50:04.390003 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"91e36abe3fab90d55024ad677354165b40cb30a56e8b1ec79bfa4418f8586d8f"} Nov 24 21:52:28 crc kubenswrapper[4812]: I1124 21:52:28.256812 4812 generic.go:334] "Generic (PLEG): container finished" podID="b613d8c2-5066-4eb3-bad2-4e662e2b5078" containerID="8773c77354d94be0c831fc17e9491f7bc66d0e076d301831d4ff89e133e64181" exitCode=0 Nov 24 21:52:28 crc kubenswrapper[4812]: I1124 21:52:28.256882 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" event={"ID":"b613d8c2-5066-4eb3-bad2-4e662e2b5078","Type":"ContainerDied","Data":"8773c77354d94be0c831fc17e9491f7bc66d0e076d301831d4ff89e133e64181"} Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.154686 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.285421 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" event={"ID":"b613d8c2-5066-4eb3-bad2-4e662e2b5078","Type":"ContainerDied","Data":"51166c5222268009d6de2ee39e6464874788c74c3d4544267c0ffa27fa869c2f"} Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.285465 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51166c5222268009d6de2ee39e6464874788c74c3d4544267c0ffa27fa869c2f" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.285534 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-lpvlm" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.347456 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-ssh-key\") pod \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.347607 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-inventory\") pod \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.347774 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-combined-ca-bundle\") pod \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.347886 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-agent-neutron-config-0\") pod \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.347962 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjlbs\" (UniqueName: \"kubernetes.io/projected/b613d8c2-5066-4eb3-bad2-4e662e2b5078-kube-api-access-gjlbs\") pod \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\" (UID: \"b613d8c2-5066-4eb3-bad2-4e662e2b5078\") " Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.355429 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "b613d8c2-5066-4eb3-bad2-4e662e2b5078" (UID: "b613d8c2-5066-4eb3-bad2-4e662e2b5078"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.369627 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b613d8c2-5066-4eb3-bad2-4e662e2b5078-kube-api-access-gjlbs" (OuterVolumeSpecName: "kube-api-access-gjlbs") pod "b613d8c2-5066-4eb3-bad2-4e662e2b5078" (UID: "b613d8c2-5066-4eb3-bad2-4e662e2b5078"). 
InnerVolumeSpecName "kube-api-access-gjlbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.391499 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-inventory" (OuterVolumeSpecName: "inventory") pod "b613d8c2-5066-4eb3-bad2-4e662e2b5078" (UID: "b613d8c2-5066-4eb3-bad2-4e662e2b5078"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.392815 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b613d8c2-5066-4eb3-bad2-4e662e2b5078" (UID: "b613d8c2-5066-4eb3-bad2-4e662e2b5078"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.400910 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "b613d8c2-5066-4eb3-bad2-4e662e2b5078" (UID: "b613d8c2-5066-4eb3-bad2-4e662e2b5078"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.451797 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.451935 4812 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.451947 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjlbs\" (UniqueName: \"kubernetes.io/projected/b613d8c2-5066-4eb3-bad2-4e662e2b5078-kube-api-access-gjlbs\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.451957 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:30 crc kubenswrapper[4812]: I1124 21:52:30.451965 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b613d8c2-5066-4eb3-bad2-4e662e2b5078-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:32 crc kubenswrapper[4812]: I1124 21:52:32.998545 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:52:32 crc kubenswrapper[4812]: I1124 21:52:32.999062 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.321723 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9ps94"] Nov 24 21:52:46 crc kubenswrapper[4812]: E1124 21:52:46.323044 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b613d8c2-5066-4eb3-bad2-4e662e2b5078" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.323066 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b613d8c2-5066-4eb3-bad2-4e662e2b5078" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.323454 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b613d8c2-5066-4eb3-bad2-4e662e2b5078" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.326147 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.348579 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ps94"] Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.453182 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-utilities\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.453258 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt7xj\" (UniqueName: \"kubernetes.io/projected/e97a8074-9685-425e-960e-44fab12d2e05-kube-api-access-mt7xj\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.453399 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-catalog-content\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.555249 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-utilities\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.555327 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt7xj\" (UniqueName: \"kubernetes.io/projected/e97a8074-9685-425e-960e-44fab12d2e05-kube-api-access-mt7xj\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.555463 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-catalog-content\") pod 
\"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.555896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-utilities\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.555932 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-catalog-content\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.596655 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt7xj\" (UniqueName: \"kubernetes.io/projected/e97a8074-9685-425e-960e-44fab12d2e05-kube-api-access-mt7xj\") pod \"certified-operators-9ps94\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") " pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.636146 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.636370 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" containerName="nova-cell0-conductor-conductor" containerID="cri-o://d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" gracePeriod=30 Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.651498 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.651711 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="94d71826-08db-4e3a-a03d-e8141947dcc0" containerName="nova-cell1-conductor-conductor" containerID="cri-o://f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673" gracePeriod=30 Nov 24 21:52:46 crc kubenswrapper[4812]: I1124 21:52:46.692046 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.024864 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9ps94"] Nov 24 21:52:47 crc kubenswrapper[4812]: W1124 21:52:47.322383 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode97a8074_9685_425e_960e_44fab12d2e05.slice/crio-9775000f824f9f0f1b0d434a9be5d81cabfe12f908b68a7ab968308e09f0c485 WatchSource:0}: Error finding container 9775000f824f9f0f1b0d434a9be5d81cabfe12f908b68a7ab968308e09f0c485: Status 404 returned error can't find the container with id 9775000f824f9f0f1b0d434a9be5d81cabfe12f908b68a7ab968308e09f0c485 Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.559618 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ps94" event={"ID":"e97a8074-9685-425e-960e-44fab12d2e05","Type":"ContainerStarted","Data":"9775000f824f9f0f1b0d434a9be5d81cabfe12f908b68a7ab968308e09f0c485"} Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.570495 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.570733 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="021029af-7cb0-4e84-bd75-8fe60d9d41d4" containerName="nova-scheduler-scheduler" containerID="cri-o://6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb" gracePeriod=30 Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.588747 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.589009 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-log" containerID="cri-o://5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7" gracePeriod=30 Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.589483 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-api" containerID="cri-o://9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5" gracePeriod=30 Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.603943 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.604161 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-log" containerID="cri-o://cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c" gracePeriod=30 Nov 24 21:52:47 crc kubenswrapper[4812]: I1124 21:52:47.604659 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-metadata" containerID="cri-o://a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af" gracePeriod=30 Nov 24 21:52:47 crc kubenswrapper[4812]: E1124 21:52:47.982960 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 21:52:47 crc kubenswrapper[4812]: E1124 21:52:47.992670 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 21:52:47 crc kubenswrapper[4812]: E1124 21:52:47.994807 4812 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 24 21:52:47 crc kubenswrapper[4812]: E1124 21:52:47.994882 4812 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" containerName="nova-cell0-conductor-conductor" Nov 24 21:52:48 crc kubenswrapper[4812]: I1124 21:52:48.575190 4812 generic.go:334] "Generic (PLEG): container finished" podID="f864a763-a0e3-49bf-a7cf-913010c09061" containerID="5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7" exitCode=143 Nov 24 21:52:48 crc kubenswrapper[4812]: I1124 21:52:48.575257 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f864a763-a0e3-49bf-a7cf-913010c09061","Type":"ContainerDied","Data":"5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7"} Nov 24 21:52:48 crc kubenswrapper[4812]: I1124 21:52:48.577394 4812 generic.go:334] "Generic (PLEG): container finished" podID="e97a8074-9685-425e-960e-44fab12d2e05" containerID="bcb5748eff25d395dbb81501a2cd06fa11838fc3d554ccfcb317b9eb5fdcf657" exitCode=0 Nov 24 21:52:48 crc kubenswrapper[4812]: I1124 21:52:48.577460 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ps94" event={"ID":"e97a8074-9685-425e-960e-44fab12d2e05","Type":"ContainerDied","Data":"bcb5748eff25d395dbb81501a2cd06fa11838fc3d554ccfcb317b9eb5fdcf657"} Nov 24 21:52:48 crc kubenswrapper[4812]: I1124 21:52:48.580668 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerID="cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c" exitCode=143 Nov 24 21:52:48 crc kubenswrapper[4812]: I1124 21:52:48.580696 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a","Type":"ContainerDied","Data":"cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c"} Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.061284 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.072040 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.209637 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-combined-ca-bundle\") pod \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.209791 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-config-data\") pod \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.209849 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh4z6\" (UniqueName: \"kubernetes.io/projected/021029af-7cb0-4e84-bd75-8fe60d9d41d4-kube-api-access-qh4z6\") pod \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\" (UID: \"021029af-7cb0-4e84-bd75-8fe60d9d41d4\") " Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.209893 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-config-data\") pod \"94d71826-08db-4e3a-a03d-e8141947dcc0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.209986 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-combined-ca-bundle\") pod \"94d71826-08db-4e3a-a03d-e8141947dcc0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.210110 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lv8h\" (UniqueName: \"kubernetes.io/projected/94d71826-08db-4e3a-a03d-e8141947dcc0-kube-api-access-2lv8h\") pod \"94d71826-08db-4e3a-a03d-e8141947dcc0\" (UID: \"94d71826-08db-4e3a-a03d-e8141947dcc0\") " Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.215632 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021029af-7cb0-4e84-bd75-8fe60d9d41d4-kube-api-access-qh4z6" (OuterVolumeSpecName: "kube-api-access-qh4z6") pod "021029af-7cb0-4e84-bd75-8fe60d9d41d4" (UID: "021029af-7cb0-4e84-bd75-8fe60d9d41d4"). InnerVolumeSpecName "kube-api-access-qh4z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.217550 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d71826-08db-4e3a-a03d-e8141947dcc0-kube-api-access-2lv8h" (OuterVolumeSpecName: "kube-api-access-2lv8h") pod "94d71826-08db-4e3a-a03d-e8141947dcc0" (UID: "94d71826-08db-4e3a-a03d-e8141947dcc0"). InnerVolumeSpecName "kube-api-access-2lv8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.250015 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "021029af-7cb0-4e84-bd75-8fe60d9d41d4" (UID: "021029af-7cb0-4e84-bd75-8fe60d9d41d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.251148 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-config-data" (OuterVolumeSpecName: "config-data") pod "94d71826-08db-4e3a-a03d-e8141947dcc0" (UID: "94d71826-08db-4e3a-a03d-e8141947dcc0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.277311 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d71826-08db-4e3a-a03d-e8141947dcc0" (UID: "94d71826-08db-4e3a-a03d-e8141947dcc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.284766 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-config-data" (OuterVolumeSpecName: "config-data") pod "021029af-7cb0-4e84-bd75-8fe60d9d41d4" (UID: "021029af-7cb0-4e84-bd75-8fe60d9d41d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.313165 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.313191 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh4z6\" (UniqueName: \"kubernetes.io/projected/021029af-7cb0-4e84-bd75-8fe60d9d41d4-kube-api-access-qh4z6\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.313201 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.313209 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d71826-08db-4e3a-a03d-e8141947dcc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.313221 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lv8h\" (UniqueName: \"kubernetes.io/projected/94d71826-08db-4e3a-a03d-e8141947dcc0-kube-api-access-2lv8h\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.313229 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021029af-7cb0-4e84-bd75-8fe60d9d41d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.591548 4812 generic.go:334] "Generic (PLEG): container finished" podID="021029af-7cb0-4e84-bd75-8fe60d9d41d4" containerID="6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb" exitCode=0 Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.591624 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.591653 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"021029af-7cb0-4e84-bd75-8fe60d9d41d4","Type":"ContainerDied","Data":"6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb"} Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.591711 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"021029af-7cb0-4e84-bd75-8fe60d9d41d4","Type":"ContainerDied","Data":"1b66ba0d0fcc2a86153e9c9f5c4c5527cebd1bf65b96cf4ff9f5190f8d396e42"} Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.591734 4812 scope.go:117] "RemoveContainer" containerID="6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.593461 4812 generic.go:334] "Generic (PLEG): container finished" podID="94d71826-08db-4e3a-a03d-e8141947dcc0" containerID="f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673" exitCode=0 Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.593506 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94d71826-08db-4e3a-a03d-e8141947dcc0","Type":"ContainerDied","Data":"f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673"} Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.593532 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"94d71826-08db-4e3a-a03d-e8141947dcc0","Type":"ContainerDied","Data":"5b9b1b03f7e7144a13a1189be99194dd7c35be80e221c2871d998e6697ae6778"} Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.595004 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.635118 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.638055 4812 scope.go:117] "RemoveContainer" containerID="6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb" Nov 24 21:52:49 crc kubenswrapper[4812]: E1124 21:52:49.638620 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb\": container with ID starting with 6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb not found: ID does not exist" containerID="6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.638660 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb"} err="failed to get container status \"6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb\": rpc error: code = NotFound desc = could not find container \"6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb\": container with ID starting with 6e5c45ea3c733ec40c45eb897ac744e45a8e64d860aa0ad8c3e7d5f5f070dddb not found: ID does not exist" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.638685 4812 scope.go:117] "RemoveContainer" containerID="f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.643647 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.688171 4812 scope.go:117] "RemoveContainer" containerID="f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673" Nov 24 21:52:49 crc kubenswrapper[4812]: E1124 21:52:49.689063 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673\": container with ID starting with f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673 not found: ID does not exist" containerID="f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.689111 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673"} err="failed to get container status \"f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673\": rpc error: code = NotFound desc = could not find container \"f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673\": container with ID starting with f7a22ae3ea48aaac2a5a21757c8e5a8fd54be92b0dba494a59bba60bceb12673 not found: ID does not exist" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.703872 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.719041 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.742990 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: E1124 21:52:49.743509 4812 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021029af-7cb0-4e84-bd75-8fe60d9d41d4" containerName="nova-scheduler-scheduler" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.743527 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="021029af-7cb0-4e84-bd75-8fe60d9d41d4" containerName="nova-scheduler-scheduler" Nov 24 21:52:49 crc kubenswrapper[4812]: E1124 21:52:49.743554 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d71826-08db-4e3a-a03d-e8141947dcc0" containerName="nova-cell1-conductor-conductor" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.743562 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d71826-08db-4e3a-a03d-e8141947dcc0" containerName="nova-cell1-conductor-conductor" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.743771 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="021029af-7cb0-4e84-bd75-8fe60d9d41d4" containerName="nova-scheduler-scheduler" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.743804 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d71826-08db-4e3a-a03d-e8141947dcc0" containerName="nova-cell1-conductor-conductor" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.744609 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.747061 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.754913 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.761760 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.763632 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.768651 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.779978 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.834002 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5889704f-5b8d-4c34-a1ff-7295054a8cdf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.834093 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d1fbe4-c950-458c-aed0-03e810894146-config-data\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.834124 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99c2\" (UniqueName: \"kubernetes.io/projected/5889704f-5b8d-4c34-a1ff-7295054a8cdf-kube-api-access-m99c2\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.834227 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5889704f-5b8d-4c34-a1ff-7295054a8cdf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.834871 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d1fbe4-c950-458c-aed0-03e810894146-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.834963 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc5nl\" (UniqueName: \"kubernetes.io/projected/e2d1fbe4-c950-458c-aed0-03e810894146-kube-api-access-gc5nl\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.936780 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5889704f-5b8d-4c34-a1ff-7295054a8cdf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.937110 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d1fbe4-c950-458c-aed0-03e810894146-config-data\") pod \"nova-scheduler-0\" 
(UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.937138 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m99c2\" (UniqueName: \"kubernetes.io/projected/5889704f-5b8d-4c34-a1ff-7295054a8cdf-kube-api-access-m99c2\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.937165 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5889704f-5b8d-4c34-a1ff-7295054a8cdf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.937245 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d1fbe4-c950-458c-aed0-03e810894146-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.937281 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc5nl\" (UniqueName: \"kubernetes.io/projected/e2d1fbe4-c950-458c-aed0-03e810894146-kube-api-access-gc5nl\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.943304 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d1fbe4-c950-458c-aed0-03e810894146-config-data\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.943718 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5889704f-5b8d-4c34-a1ff-7295054a8cdf-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.948232 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5889704f-5b8d-4c34-a1ff-7295054a8cdf-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.952921 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc5nl\" (UniqueName: \"kubernetes.io/projected/e2d1fbe4-c950-458c-aed0-03e810894146-kube-api-access-gc5nl\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.953277 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d1fbe4-c950-458c-aed0-03e810894146-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e2d1fbe4-c950-458c-aed0-03e810894146\") " pod="openstack/nova-scheduler-0" Nov 24 21:52:49 crc kubenswrapper[4812]: I1124 21:52:49.964874 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-m99c2\" (UniqueName: \"kubernetes.io/projected/5889704f-5b8d-4c34-a1ff-7295054a8cdf-kube-api-access-m99c2\") pod \"nova-cell1-conductor-0\" (UID: \"5889704f-5b8d-4c34-a1ff-7295054a8cdf\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.110248 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.117913 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.445114 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:52:50 crc kubenswrapper[4812]: W1124 21:52:50.455672 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5889704f_5b8d_4c34_a1ff_7295054a8cdf.slice/crio-d5488d185b2b4b62e2354098d2298ba737b8af2e16e0376c18f6549742072c77 WatchSource:0}: Error finding container d5488d185b2b4b62e2354098d2298ba737b8af2e16e0376c18f6549742072c77: Status 404 returned error can't find the container with id d5488d185b2b4b62e2354098d2298ba737b8af2e16e0376c18f6549742072c77 Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.591740 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:52:50 crc kubenswrapper[4812]: W1124 21:52:50.592227 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d1fbe4_c950_458c_aed0_03e810894146.slice/crio-b8b0f6b89f834914d6779e2f7cd922aee2a5ba1e9bbfef056d189c08f3018faf WatchSource:0}: Error finding container b8b0f6b89f834914d6779e2f7cd922aee2a5ba1e9bbfef056d189c08f3018faf: Status 404 returned error can't find the container with id b8b0f6b89f834914d6779e2f7cd922aee2a5ba1e9bbfef056d189c08f3018faf Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.607302 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e2d1fbe4-c950-458c-aed0-03e810894146","Type":"ContainerStarted","Data":"b8b0f6b89f834914d6779e2f7cd922aee2a5ba1e9bbfef056d189c08f3018faf"} Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.608384 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5889704f-5b8d-4c34-a1ff-7295054a8cdf","Type":"ContainerStarted","Data":"d5488d185b2b4b62e2354098d2298ba737b8af2e16e0376c18f6549742072c77"} Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.769674 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": read tcp 10.217.0.2:48998->10.217.1.98:8775: read: connection reset by peer" Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.769837 4812 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": read tcp 10.217.0.2:48984->10.217.1.98:8775: read: connection reset by peer" Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.985889 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021029af-7cb0-4e84-bd75-8fe60d9d41d4" 
path="/var/lib/kubelet/pods/021029af-7cb0-4e84-bd75-8fe60d9d41d4/volumes" Nov 24 21:52:50 crc kubenswrapper[4812]: I1124 21:52:50.986629 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d71826-08db-4e3a-a03d-e8141947dcc0" path="/var/lib/kubelet/pods/94d71826-08db-4e3a-a03d-e8141947dcc0/volumes" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.270105 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.278331 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367542 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f864a763-a0e3-49bf-a7cf-913010c09061-logs\") pod \"f864a763-a0e3-49bf-a7cf-913010c09061\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367595 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-combined-ca-bundle\") pod \"f864a763-a0e3-49bf-a7cf-913010c09061\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367698 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndll8\" (UniqueName: \"kubernetes.io/projected/f864a763-a0e3-49bf-a7cf-913010c09061-kube-api-access-ndll8\") pod \"f864a763-a0e3-49bf-a7cf-913010c09061\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367739 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-config-data\") pod \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367769 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs7cw\" (UniqueName: \"kubernetes.io/projected/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-kube-api-access-zs7cw\") pod \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367786 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-combined-ca-bundle\") pod \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367825 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-nova-metadata-tls-certs\") pod \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367846 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-internal-tls-certs\") pod \"f864a763-a0e3-49bf-a7cf-913010c09061\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " Nov 24 21:52:51 crc 
kubenswrapper[4812]: I1124 21:52:51.367883 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-config-data\") pod \"f864a763-a0e3-49bf-a7cf-913010c09061\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367943 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-public-tls-certs\") pod \"f864a763-a0e3-49bf-a7cf-913010c09061\" (UID: \"f864a763-a0e3-49bf-a7cf-913010c09061\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.367962 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-logs\") pod \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\" (UID: \"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a\") " Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.369295 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f864a763-a0e3-49bf-a7cf-913010c09061-logs" (OuterVolumeSpecName: "logs") pod "f864a763-a0e3-49bf-a7cf-913010c09061" (UID: "f864a763-a0e3-49bf-a7cf-913010c09061"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.369989 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-logs" (OuterVolumeSpecName: "logs") pod "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" (UID: "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.374445 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f864a763-a0e3-49bf-a7cf-913010c09061-kube-api-access-ndll8" (OuterVolumeSpecName: "kube-api-access-ndll8") pod "f864a763-a0e3-49bf-a7cf-913010c09061" (UID: "f864a763-a0e3-49bf-a7cf-913010c09061"). InnerVolumeSpecName "kube-api-access-ndll8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.390931 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-kube-api-access-zs7cw" (OuterVolumeSpecName: "kube-api-access-zs7cw") pod "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" (UID: "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a"). InnerVolumeSpecName "kube-api-access-zs7cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.425747 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-config-data" (OuterVolumeSpecName: "config-data") pod "f864a763-a0e3-49bf-a7cf-913010c09061" (UID: "f864a763-a0e3-49bf-a7cf-913010c09061"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.437983 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" (UID: "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.449189 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f864a763-a0e3-49bf-a7cf-913010c09061" (UID: "f864a763-a0e3-49bf-a7cf-913010c09061"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.462273 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-config-data" (OuterVolumeSpecName: "config-data") pod "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" (UID: "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.469807 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f864a763-a0e3-49bf-a7cf-913010c09061" (UID: "f864a763-a0e3-49bf-a7cf-913010c09061"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470402 4812 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470427 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470437 4812 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470446 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470454 4812 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f864a763-a0e3-49bf-a7cf-913010c09061-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470462 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndll8\" (UniqueName: \"kubernetes.io/projected/f864a763-a0e3-49bf-a7cf-913010c09061-kube-api-access-ndll8\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470470 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.470478 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs7cw\" (UniqueName: \"kubernetes.io/projected/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-kube-api-access-zs7cw\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc 
kubenswrapper[4812]: I1124 21:52:51.470487 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.475315 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" (UID: "6b8f08ee-8290-4ce4-a03f-7717a9d8e69a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.481575 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f864a763-a0e3-49bf-a7cf-913010c09061" (UID: "f864a763-a0e3-49bf-a7cf-913010c09061"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.571774 4812 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.571808 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f864a763-a0e3-49bf-a7cf-913010c09061-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.619511 4812 generic.go:334] "Generic (PLEG): container finished" podID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerID="a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af" exitCode=0 Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.619582 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a","Type":"ContainerDied","Data":"a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af"} Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.619610 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6b8f08ee-8290-4ce4-a03f-7717a9d8e69a","Type":"ContainerDied","Data":"64968326841bd52220783ff80ff1261c1f420a44597db987c35071fad2f13974"} Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.619606 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.619630 4812 scope.go:117] "RemoveContainer" containerID="a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.633112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e2d1fbe4-c950-458c-aed0-03e810894146","Type":"ContainerStarted","Data":"11e606d28d37bcb000ebe0c578473ff818a4a15ed5f61eb2f10142f509831a20"} Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.643715 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5889704f-5b8d-4c34-a1ff-7295054a8cdf","Type":"ContainerStarted","Data":"4b17e271fcbcb1b7cb148a2a82e4329d815a28c031c7219ded4e640d47ee6196"} Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.643796 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.657512 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.658296 4812 generic.go:334] "Generic (PLEG): container finished" podID="f864a763-a0e3-49bf-a7cf-913010c09061" containerID="9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5" exitCode=0 Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.658418 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f864a763-a0e3-49bf-a7cf-913010c09061","Type":"ContainerDied","Data":"9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5"} Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.658451 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f864a763-a0e3-49bf-a7cf-913010c09061","Type":"ContainerDied","Data":"81b6f7437d5fba3e629d436ff1da94fe1869ed3994b60b4e94e000f15b3fee16"} Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.658600 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.662391 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ps94" event={"ID":"e97a8074-9685-425e-960e-44fab12d2e05","Type":"ContainerStarted","Data":"3e2c50827bc916d69d5188231d34fae9c2ee2010996cdfca2b2d89130c0fabe2"} Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.678009 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.680037 4812 scope.go:117] "RemoveContainer" containerID="cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.705561 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: E1124 21:52:51.706034 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-metadata" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706047 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-metadata" Nov 24 21:52:51 crc kubenswrapper[4812]: E1124 21:52:51.706067 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-log" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706073 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-log" Nov 24 21:52:51 crc kubenswrapper[4812]: E1124 21:52:51.706092 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-log" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706098 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-log" Nov 24 21:52:51 crc kubenswrapper[4812]: E1124 21:52:51.706115 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-api" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706121 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-api" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706311 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-metadata" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706324 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" containerName="nova-metadata-log" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706353 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-log" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.706378 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" containerName="nova-api-api" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.707468 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.709661 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.710607 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.710610 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.710595268 podStartE2EDuration="2.710595268s" podCreationTimestamp="2025-11-24 21:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:52:51.665597454 +0000 UTC m=+9365.454549825" watchObservedRunningTime="2025-11-24 21:52:51.710595268 +0000 UTC m=+9365.499547639" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.746735 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.747593 4812 scope.go:117] "RemoveContainer" containerID="a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af" Nov 24 21:52:51 crc kubenswrapper[4812]: E1124 21:52:51.750231 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af\": container with ID starting with a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af not found: ID does not exist" containerID="a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.750264 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af"} err="failed to get container status \"a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af\": rpc error: code = NotFound desc = could not find container \"a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af\": container with ID starting with a16ab86fc267249c8e93fd27d29a9b1d651bc2131ee133b0b45bc96aed0425af not found: ID does not exist" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.750285 4812 scope.go:117] "RemoveContainer" containerID="cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.756559 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.756541528 podStartE2EDuration="2.756541528s" podCreationTimestamp="2025-11-24 21:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:52:51.687657938 +0000 UTC m=+9365.476610309" watchObservedRunningTime="2025-11-24 21:52:51.756541528 +0000 UTC m=+9365.545493899" Nov 24 21:52:51 crc kubenswrapper[4812]: E1124 21:52:51.756806 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c\": container with ID starting with cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c not found: ID does not exist" 
containerID="cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.756853 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c"} err="failed to get container status \"cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c\": rpc error: code = NotFound desc = could not find container \"cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c\": container with ID starting with cca9d001e7b2b0b1503f184cba192af641e4b4b01d27e89de428755d71a90a6c not found: ID does not exist" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.756880 4812 scope.go:117] "RemoveContainer" containerID="9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.775437 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.775498 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-config-data\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.775585 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.775641 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ede438-1eb5-43b3-86ae-6d4629ce5acb-logs\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.775666 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8jx9\" (UniqueName: \"kubernetes.io/projected/81ede438-1eb5-43b3-86ae-6d4629ce5acb-kube-api-access-g8jx9\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.776880 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.786769 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.795323 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.797752 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.800871 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.800899 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.801156 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.818855 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878635 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878687 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878726 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-config-data\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878757 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878789 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ede438-1eb5-43b3-86ae-6d4629ce5acb-logs\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878817 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8jx9\" (UniqueName: \"kubernetes.io/projected/81ede438-1eb5-43b3-86ae-6d4629ce5acb-kube-api-access-g8jx9\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878857 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7d15f6e-88a0-44d8-a0db-164710a033d1-logs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878901 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878933 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-config-data\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.878985 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hckbm\" (UniqueName: \"kubernetes.io/projected/e7d15f6e-88a0-44d8-a0db-164710a033d1-kube-api-access-hckbm\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.879006 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.879502 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81ede438-1eb5-43b3-86ae-6d4629ce5acb-logs\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.880582 4812 scope.go:117] "RemoveContainer" containerID="5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.890082 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.895311 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.895775 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81ede438-1eb5-43b3-86ae-6d4629ce5acb-config-data\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.897786 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8jx9\" (UniqueName: \"kubernetes.io/projected/81ede438-1eb5-43b3-86ae-6d4629ce5acb-kube-api-access-g8jx9\") pod \"nova-metadata-0\" (UID: \"81ede438-1eb5-43b3-86ae-6d4629ce5acb\") " pod="openstack/nova-metadata-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.999550 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hckbm\" (UniqueName: \"kubernetes.io/projected/e7d15f6e-88a0-44d8-a0db-164710a033d1-kube-api-access-hckbm\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc 
kubenswrapper[4812]: I1124 21:52:51.999598 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.999714 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.999759 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-config-data\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:51 crc kubenswrapper[4812]: I1124 21:52:51.999794 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.000843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7d15f6e-88a0-44d8-a0db-164710a033d1-logs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.001494 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7d15f6e-88a0-44d8-a0db-164710a033d1-logs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.004771 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-public-tls-certs\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.009730 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-config-data\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.012237 4812 scope.go:117] "RemoveContainer" containerID="9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.014251 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.014421 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7d15f6e-88a0-44d8-a0db-164710a033d1-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: E1124 21:52:52.014535 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5\": container with ID starting with 9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5 not found: ID does not exist" containerID="9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.014585 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5"} err="failed to get container status \"9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5\": rpc error: code = NotFound desc = could not find container \"9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5\": container with ID starting with 9e60e42af1acb373af21c44a6794f8a97611d1bd243fd93913c94b9347e72ea5 not found: ID does not exist" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.014607 4812 scope.go:117] "RemoveContainer" containerID="5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7" Nov 24 21:52:52 crc kubenswrapper[4812]: E1124 21:52:52.014923 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7\": container with ID starting with 5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7 not found: ID does not exist" containerID="5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.014942 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7"} err="failed to get container status \"5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7\": rpc error: code = NotFound desc = could not find container \"5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7\": container with ID starting with 5ea0e9eb78951157455459dd1cebfc399725d65e14242ce624635fac7bd597b7 not found: ID does not exist" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.016650 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hckbm\" (UniqueName: \"kubernetes.io/projected/e7d15f6e-88a0-44d8-a0db-164710a033d1-kube-api-access-hckbm\") pod \"nova-api-0\" (UID: \"e7d15f6e-88a0-44d8-a0db-164710a033d1\") " pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.035693 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.296563 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.546911 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.651479 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.680726 4812 generic.go:334] "Generic (PLEG): container finished" podID="e97a8074-9685-425e-960e-44fab12d2e05" containerID="3e2c50827bc916d69d5188231d34fae9c2ee2010996cdfca2b2d89130c0fabe2" exitCode=0 Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.680819 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ps94" event={"ID":"e97a8074-9685-425e-960e-44fab12d2e05","Type":"ContainerDied","Data":"3e2c50827bc916d69d5188231d34fae9c2ee2010996cdfca2b2d89130c0fabe2"} Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.686306 4812 generic.go:334] "Generic (PLEG): container finished" podID="dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" containerID="d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" exitCode=0 Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.686370 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66","Type":"ContainerDied","Data":"d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb"} Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.686397 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.686419 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66","Type":"ContainerDied","Data":"3ce7236eea21df5331f48c9624f2ded1012b8a51612c9a13391ee1ce0f78849f"} Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.686438 4812 scope.go:117] "RemoveContainer" containerID="d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.688508 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81ede438-1eb5-43b3-86ae-6d4629ce5acb","Type":"ContainerStarted","Data":"4150b517da3f5dcf721184ebee6e5b0ec6ceb5e52dfeffad8d2a25b44e42b78f"} Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.718078 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-config-data\") pod \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.718845 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v8m6\" (UniqueName: \"kubernetes.io/projected/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-kube-api-access-9v8m6\") pod \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.719119 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-combined-ca-bundle\") pod \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\" (UID: \"dd6b987f-ea2d-49ee-8d1c-3abbe6463a66\") " Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.719862 4812 scope.go:117] "RemoveContainer" containerID="d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" Nov 24 21:52:52 crc kubenswrapper[4812]: E1124 21:52:52.721306 4812 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb\": container with ID starting with d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb not found: ID does not exist" containerID="d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.721561 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb"} err="failed to get container status \"d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb\": rpc error: code = NotFound desc = could not find container \"d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb\": container with ID starting with d29f6c6d47d8621b6fec66813f882174bb2e3411238e1ca13069fa9dedd9d7eb not found: ID does not exist" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.726148 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-kube-api-access-9v8m6" (OuterVolumeSpecName: "kube-api-access-9v8m6") pod "dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" (UID: "dd6b987f-ea2d-49ee-8d1c-3abbe6463a66"). InnerVolumeSpecName "kube-api-access-9v8m6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.769919 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" (UID: "dd6b987f-ea2d-49ee-8d1c-3abbe6463a66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.771025 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-config-data" (OuterVolumeSpecName: "config-data") pod "dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" (UID: "dd6b987f-ea2d-49ee-8d1c-3abbe6463a66"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.823449 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.823709 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.823721 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v8m6\" (UniqueName: \"kubernetes.io/projected/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66-kube-api-access-9v8m6\") on node \"crc\" DevicePath \"\"" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.823552 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:52:52 crc kubenswrapper[4812]: W1124 21:52:52.848889 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7d15f6e_88a0_44d8_a0db_164710a033d1.slice/crio-0fe3eabbedf112368302910f3cbc63cb3ad733e618f2fe6f954e6e12d61e843b WatchSource:0}: Error finding container 0fe3eabbedf112368302910f3cbc63cb3ad733e618f2fe6f954e6e12d61e843b: Status 404 returned error can't find the container with id 0fe3eabbedf112368302910f3cbc63cb3ad733e618f2fe6f954e6e12d61e843b Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.981199 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b8f08ee-8290-4ce4-a03f-7717a9d8e69a" path="/var/lib/kubelet/pods/6b8f08ee-8290-4ce4-a03f-7717a9d8e69a/volumes" Nov 24 21:52:52 crc kubenswrapper[4812]: I1124 21:52:52.982086 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f864a763-a0e3-49bf-a7cf-913010c09061" path="/var/lib/kubelet/pods/f864a763-a0e3-49bf-a7cf-913010c09061/volumes" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.040813 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.058602 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.067554 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:52:53 crc kubenswrapper[4812]: E1124 21:52:53.068305 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" containerName="nova-cell0-conductor-conductor" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.068416 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" containerName="nova-cell0-conductor-conductor" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.068771 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" containerName="nova-cell0-conductor-conductor" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.069798 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.071707 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.098040 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.135235 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0e93a-8395-4a70-8745-a35492914079-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.135424 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrbj\" (UniqueName: \"kubernetes.io/projected/67d0e93a-8395-4a70-8745-a35492914079-kube-api-access-dcrbj\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.135470 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0e93a-8395-4a70-8745-a35492914079-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.238642 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcrbj\" (UniqueName: \"kubernetes.io/projected/67d0e93a-8395-4a70-8745-a35492914079-kube-api-access-dcrbj\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.238736 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0e93a-8395-4a70-8745-a35492914079-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.238797 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0e93a-8395-4a70-8745-a35492914079-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.243705 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67d0e93a-8395-4a70-8745-a35492914079-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.249814 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67d0e93a-8395-4a70-8745-a35492914079-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.253842 4812 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrbj\" (UniqueName: \"kubernetes.io/projected/67d0e93a-8395-4a70-8745-a35492914079-kube-api-access-dcrbj\") pod \"nova-cell0-conductor-0\" (UID: \"67d0e93a-8395-4a70-8745-a35492914079\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.405288 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.709017 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ps94" event={"ID":"e97a8074-9685-425e-960e-44fab12d2e05","Type":"ContainerStarted","Data":"e18cb4c7bd76f31ef34b97416c0a7ac86930d951ec16e6e81ecdab46c3136fe2"} Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.711972 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7d15f6e-88a0-44d8-a0db-164710a033d1","Type":"ContainerStarted","Data":"8edf3fd3ce1945c15fdd63a5c63ccc02df59521f6afb6339aaa6ebac5c1649dd"} Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.712035 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7d15f6e-88a0-44d8-a0db-164710a033d1","Type":"ContainerStarted","Data":"601c8c3bf9022ee54adfebe3b955d9c54075674989abbb747c46d7ec03acc168"} Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.712045 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e7d15f6e-88a0-44d8-a0db-164710a033d1","Type":"ContainerStarted","Data":"0fe3eabbedf112368302910f3cbc63cb3ad733e618f2fe6f954e6e12d61e843b"} Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.723969 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81ede438-1eb5-43b3-86ae-6d4629ce5acb","Type":"ContainerStarted","Data":"9e9261cc5e866b31b00d4e380169f77bf8727b51c73e317d9a928f9b1efe1360"} Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.724016 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"81ede438-1eb5-43b3-86ae-6d4629ce5acb","Type":"ContainerStarted","Data":"196d911dbb519fdb8127658adc9a7c7ef08fbd2d52e4979974e65ec148a5870c"} Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.739549 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9ps94" podStartSLOduration=3.186949243 podStartE2EDuration="7.739532411s" podCreationTimestamp="2025-11-24 21:52:46 +0000 UTC" firstStartedPulling="2025-11-24 21:52:48.580249742 +0000 UTC m=+9362.369202123" lastFinishedPulling="2025-11-24 21:52:53.13283292 +0000 UTC m=+9366.921785291" observedRunningTime="2025-11-24 21:52:53.735666912 +0000 UTC m=+9367.524619283" watchObservedRunningTime="2025-11-24 21:52:53.739532411 +0000 UTC m=+9367.528484782" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.763964 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.763949382 podStartE2EDuration="2.763949382s" podCreationTimestamp="2025-11-24 21:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:52:53.760793063 +0000 UTC m=+9367.549745434" watchObservedRunningTime="2025-11-24 21:52:53.763949382 +0000 UTC m=+9367.552901753" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 
21:52:53.929100 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9290823660000003 podStartE2EDuration="2.929082366s" podCreationTimestamp="2025-11-24 21:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:52:53.785658577 +0000 UTC m=+9367.574610948" watchObservedRunningTime="2025-11-24 21:52:53.929082366 +0000 UTC m=+9367.718034737" Nov 24 21:52:53 crc kubenswrapper[4812]: I1124 21:52:53.934547 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:52:53 crc kubenswrapper[4812]: W1124 21:52:53.940314 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67d0e93a_8395_4a70_8745_a35492914079.slice/crio-98660297aca40c89c95a83d4c85275131b92895f3a5dd68ac20fbe8efaece2e1 WatchSource:0}: Error finding container 98660297aca40c89c95a83d4c85275131b92895f3a5dd68ac20fbe8efaece2e1: Status 404 returned error can't find the container with id 98660297aca40c89c95a83d4c85275131b92895f3a5dd68ac20fbe8efaece2e1 Nov 24 21:52:54 crc kubenswrapper[4812]: I1124 21:52:54.739631 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"67d0e93a-8395-4a70-8745-a35492914079","Type":"ContainerStarted","Data":"5aac0a947e22f8bd91eb15ee8b908830d10ed2e54b95f27ca75de581ef8296ad"} Nov 24 21:52:54 crc kubenswrapper[4812]: I1124 21:52:54.739676 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"67d0e93a-8395-4a70-8745-a35492914079","Type":"ContainerStarted","Data":"98660297aca40c89c95a83d4c85275131b92895f3a5dd68ac20fbe8efaece2e1"} Nov 24 21:52:54 crc kubenswrapper[4812]: I1124 21:52:54.764217 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.7642014110000002 podStartE2EDuration="1.764201411s" podCreationTimestamp="2025-11-24 21:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:52:54.756667058 +0000 UTC m=+9368.545619429" watchObservedRunningTime="2025-11-24 21:52:54.764201411 +0000 UTC m=+9368.553153782" Nov 24 21:52:54 crc kubenswrapper[4812]: I1124 21:52:54.981103 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6b987f-ea2d-49ee-8d1c-3abbe6463a66" path="/var/lib/kubelet/pods/dd6b987f-ea2d-49ee-8d1c-3abbe6463a66/volumes" Nov 24 21:52:55 crc kubenswrapper[4812]: I1124 21:52:55.110914 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:52:55 crc kubenswrapper[4812]: I1124 21:52:55.189961 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 21:52:55 crc kubenswrapper[4812]: I1124 21:52:55.756822 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 21:52:56 crc kubenswrapper[4812]: I1124 21:52:56.693374 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9ps94" Nov 24 21:52:56 crc kubenswrapper[4812]: I1124 21:52:56.693961 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9ps94" Nov 24 
Nov 24 21:52:56 crc kubenswrapper[4812]: I1124 21:52:56.780108 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9ps94"
Nov 24 21:52:57 crc kubenswrapper[4812]: I1124 21:52:57.036949 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 21:52:57 crc kubenswrapper[4812]: I1124 21:52:57.037020 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 24 21:53:00 crc kubenswrapper[4812]: I1124 21:53:00.111068 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Nov 24 21:53:00 crc kubenswrapper[4812]: I1124 21:53:00.147118 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Nov 24 21:53:00 crc kubenswrapper[4812]: I1124 21:53:00.892651 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Nov 24 21:53:02 crc kubenswrapper[4812]: I1124 21:53:02.036715 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 21:53:02 crc kubenswrapper[4812]: I1124 21:53:02.036842 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Nov 24 21:53:02 crc kubenswrapper[4812]: I1124 21:53:02.297944 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 21:53:02 crc kubenswrapper[4812]: I1124 21:53:02.297997 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Nov 24 21:53:02 crc kubenswrapper[4812]: I1124 21:53:02.998158 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:53:02 crc kubenswrapper[4812]: I1124 21:53:02.998500 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:53:03 crc kubenswrapper[4812]: I1124 21:53:03.054610 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="81ede438-1eb5-43b3-86ae-6d4629ce5acb" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 21:53:03 crc kubenswrapper[4812]: I1124 21:53:03.055058 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="81ede438-1eb5-43b3-86ae-6d4629ce5acb" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 21:53:03 crc kubenswrapper[4812]: I1124 21:53:03.314535 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e7d15f6e-88a0-44d8-a0db-164710a033d1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 21:53:03 crc kubenswrapper[4812]: I1124 21:53:03.315074 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e7d15f6e-88a0-44d8-a0db-164710a033d1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 24 21:53:03 crc kubenswrapper[4812]: I1124 21:53:03.466487 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Nov 24 21:53:05 crc kubenswrapper[4812]: I1124 21:53:05.117971 4812 scope.go:117] "RemoveContainer" containerID="0a5ea4c46b5c49852327162912e4b9a8154e31a4b048ac6baf12ac5fe074b654"
Nov 24 21:53:06 crc kubenswrapper[4812]: I1124 21:53:06.905957 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9ps94"
Nov 24 21:53:06 crc kubenswrapper[4812]: I1124 21:53:06.982008 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ps94"]
Nov 24 21:53:07 crc kubenswrapper[4812]: I1124 21:53:07.930627 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9ps94" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="registry-server" containerID="cri-o://e18cb4c7bd76f31ef34b97416c0a7ac86930d951ec16e6e81ecdab46c3136fe2" gracePeriod=2
Nov 24 21:53:08 crc kubenswrapper[4812]: I1124 21:53:08.942536 4812 generic.go:334] "Generic (PLEG): container finished" podID="e97a8074-9685-425e-960e-44fab12d2e05" containerID="e18cb4c7bd76f31ef34b97416c0a7ac86930d951ec16e6e81ecdab46c3136fe2" exitCode=0
Nov 24 21:53:08 crc kubenswrapper[4812]: I1124 21:53:08.942600 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ps94" event={"ID":"e97a8074-9685-425e-960e-44fab12d2e05","Type":"ContainerDied","Data":"e18cb4c7bd76f31ef34b97416c0a7ac86930d951ec16e6e81ecdab46c3136fe2"}
Nov 24 21:53:08 crc kubenswrapper[4812]: I1124 21:53:08.943072 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9ps94" event={"ID":"e97a8074-9685-425e-960e-44fab12d2e05","Type":"ContainerDied","Data":"9775000f824f9f0f1b0d434a9be5d81cabfe12f908b68a7ab968308e09f0c485"}
Nov 24 21:53:08 crc kubenswrapper[4812]: I1124 21:53:08.943109 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9775000f824f9f0f1b0d434a9be5d81cabfe12f908b68a7ab968308e09f0c485"
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.398141 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ps94"
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.491503 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt7xj\" (UniqueName: \"kubernetes.io/projected/e97a8074-9685-425e-960e-44fab12d2e05-kube-api-access-mt7xj\") pod \"e97a8074-9685-425e-960e-44fab12d2e05\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") "
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.492248 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-utilities\") pod \"e97a8074-9685-425e-960e-44fab12d2e05\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") "
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.492451 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-catalog-content\") pod \"e97a8074-9685-425e-960e-44fab12d2e05\" (UID: \"e97a8074-9685-425e-960e-44fab12d2e05\") "
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.493369 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-utilities" (OuterVolumeSpecName: "utilities") pod "e97a8074-9685-425e-960e-44fab12d2e05" (UID: "e97a8074-9685-425e-960e-44fab12d2e05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.494108 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.514470 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97a8074-9685-425e-960e-44fab12d2e05-kube-api-access-mt7xj" (OuterVolumeSpecName: "kube-api-access-mt7xj") pod "e97a8074-9685-425e-960e-44fab12d2e05" (UID: "e97a8074-9685-425e-960e-44fab12d2e05"). InnerVolumeSpecName "kube-api-access-mt7xj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.560294 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e97a8074-9685-425e-960e-44fab12d2e05" (UID: "e97a8074-9685-425e-960e-44fab12d2e05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.596571 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e97a8074-9685-425e-960e-44fab12d2e05-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.596622 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt7xj\" (UniqueName: \"kubernetes.io/projected/e97a8074-9685-425e-960e-44fab12d2e05-kube-api-access-mt7xj\") on node \"crc\" DevicePath \"\""
Nov 24 21:53:09 crc kubenswrapper[4812]: I1124 21:53:09.953353 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9ps94"
Nov 24 21:53:10 crc kubenswrapper[4812]: I1124 21:53:10.006409 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9ps94"]
Nov 24 21:53:10 crc kubenswrapper[4812]: I1124 21:53:10.021677 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9ps94"]
Nov 24 21:53:10 crc kubenswrapper[4812]: I1124 21:53:10.988518 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97a8074-9685-425e-960e-44fab12d2e05" path="/var/lib/kubelet/pods/e97a8074-9685-425e-960e-44fab12d2e05/volumes"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.046040 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.048224 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.060357 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.308694 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.309266 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.312930 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.316933 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 21:53:12 crc kubenswrapper[4812]: I1124 21:53:12.997797 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 24 21:53:13 crc kubenswrapper[4812]: I1124 21:53:13.005104 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 24 21:53:13 crc kubenswrapper[4812]: I1124 21:53:13.006825 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.357215 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"]
Nov 24 21:53:14 crc kubenswrapper[4812]: E1124 21:53:14.357962 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="registry-server"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.357978 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="registry-server"
Nov 24 21:53:14 crc kubenswrapper[4812]: E1124 21:53:14.358189 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="extract-content"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.358200 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="extract-content"
Nov 24 21:53:14 crc kubenswrapper[4812]: E1124 21:53:14.358243 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="extract-utilities"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.358253 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="extract-utilities"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.359603 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97a8074-9685-425e-960e-44fab12d2e05" containerName="registry-server"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.360510 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.363061 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-2x7tj"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.363260 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.363383 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.363631 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.363690 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.367735 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.367739 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.376975 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"]
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.438550 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.438691 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.438849 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.438995 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.439017 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.439182 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.439216 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.439302 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.439612 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82zv4\" (UniqueName: \"kubernetes.io/projected/2f62942a-1274-4b4d-8f6a-82488ebd090b-kube-api-access-82zv4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542130 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542186 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542263 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542294 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542329 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542446 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82zv4\" (UniqueName: \"kubernetes.io/projected/2f62942a-1274-4b4d-8f6a-82488ebd090b-kube-api-access-82zv4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542483 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542522 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.542597 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.543816 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.558186 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.558739 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.562896 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.563781 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.563943 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.581982 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.594613 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.598240 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82zv4\" (UniqueName: \"kubernetes.io/projected/2f62942a-1274-4b4d-8f6a-82488ebd090b-kube-api-access-82zv4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:14 crc kubenswrapper[4812]: I1124 21:53:14.691716 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:53:15 crc kubenswrapper[4812]: I1124 21:53:15.316857 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"]
Nov 24 21:53:15 crc kubenswrapper[4812]: W1124 21:53:15.321941 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f62942a_1274_4b4d_8f6a_82488ebd090b.slice/crio-f3fa4a88a1bffb2c91fbe3b9e28237b2791964bd8452a43c8c20de9f891299e8 WatchSource:0}: Error finding container f3fa4a88a1bffb2c91fbe3b9e28237b2791964bd8452a43c8c20de9f891299e8: Status 404 returned error can't find the container with id f3fa4a88a1bffb2c91fbe3b9e28237b2791964bd8452a43c8c20de9f891299e8
Nov 24 21:53:16 crc kubenswrapper[4812]: I1124 21:53:16.033879 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z" event={"ID":"2f62942a-1274-4b4d-8f6a-82488ebd090b","Type":"ContainerStarted","Data":"f3fa4a88a1bffb2c91fbe3b9e28237b2791964bd8452a43c8c20de9f891299e8"}
Nov 24 21:53:17 crc kubenswrapper[4812]: I1124 21:53:17.059807 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z" event={"ID":"2f62942a-1274-4b4d-8f6a-82488ebd090b","Type":"ContainerStarted","Data":"108d897e7e8b9577c7d12ae8b3cdf416a6143ae0cecaff25caa3cc39e4cfc053"}
Nov 24 21:53:17 crc kubenswrapper[4812]: I1124 21:53:17.104201 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z" podStartSLOduration=2.556385986 podStartE2EDuration="3.104172599s" podCreationTimestamp="2025-11-24 21:53:14 +0000 UTC" firstStartedPulling="2025-11-24 21:53:15.325306934 +0000 UTC m=+9389.114259345" lastFinishedPulling="2025-11-24 21:53:15.873093547 +0000 UTC m=+9389.662045958" observedRunningTime="2025-11-24 21:53:17.085858991 +0000 UTC m=+9390.874811402" watchObservedRunningTime="2025-11-24 21:53:17.104172599 +0000 UTC m=+9390.893125000"
Nov 24 21:53:23 crc kubenswrapper[4812]: I1124 21:53:23.013639 4812 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poddd6b987f-ea2d-49ee-8d1c-3abbe6463a66"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poddd6b987f-ea2d-49ee-8d1c-3abbe6463a66] : Timed out while waiting for systemd to remove kubepods-besteffort-poddd6b987f_ea2d_49ee_8d1c_3abbe6463a66.slice"
Nov 24 21:53:32 crc kubenswrapper[4812]: I1124 21:53:32.998587 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:53:33 crc kubenswrapper[4812]: I1124 21:53:32.999594 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:53:33 crc kubenswrapper[4812]: I1124 21:53:32.999711 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk"
Nov 24 21:53:33 crc kubenswrapper[4812]: I1124 21:53:33.269687 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91e36abe3fab90d55024ad677354165b40cb30a56e8b1ec79bfa4418f8586d8f"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 21:53:33 crc kubenswrapper[4812]: I1124 21:53:33.269809 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://91e36abe3fab90d55024ad677354165b40cb30a56e8b1ec79bfa4418f8586d8f" gracePeriod=600
Nov 24 21:53:34 crc kubenswrapper[4812]: I1124 21:53:34.287057 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="91e36abe3fab90d55024ad677354165b40cb30a56e8b1ec79bfa4418f8586d8f" exitCode=0
Nov 24 21:53:34 crc kubenswrapper[4812]: I1124 21:53:34.287143 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"91e36abe3fab90d55024ad677354165b40cb30a56e8b1ec79bfa4418f8586d8f"}
Nov 24 21:53:34 crc kubenswrapper[4812]: I1124 21:53:34.287727 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"}
Nov 24 21:53:34 crc kubenswrapper[4812]: I1124 21:53:34.287753 4812 scope.go:117] "RemoveContainer" containerID="8ce3ee6436c2955bffa6e00b3c0a08aeb5afa01a08982a10a34b4bf388dd4337"
Nov 24 21:54:05 crc kubenswrapper[4812]: I1124 21:54:05.339806 4812 scope.go:117] "RemoveContainer" containerID="28d0ab93f59c3de0b614dfea878b5694d93e74f7cfc4b804036776c4d4a6aaa8"
Nov 24 21:54:05 crc kubenswrapper[4812]: I1124 21:54:05.399138 4812 scope.go:117] "RemoveContainer" containerID="f42ca3d0066ede7771b0f71920924b26c0a37cf7a16e6e5c6199e967a3fb8d0e"
Nov 24 21:56:02 crc kubenswrapper[4812]: I1124 21:56:02.998849 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:56:02 crc kubenswrapper[4812]: I1124 21:56:02.999643 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:56:32 crc kubenswrapper[4812]: I1124 21:56:32.999119 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:56:33 crc kubenswrapper[4812]: I1124 21:56:32.999783 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.319231 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bn5gn"]
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.322773 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.333406 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn5gn"]
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.385892 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-utilities\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.386289 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-catalog-content\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.386326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zt5l\" (UniqueName: \"kubernetes.io/projected/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-kube-api-access-5zt5l\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.488004 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-utilities\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.488066 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-catalog-content\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.488129 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zt5l\" (UniqueName: \"kubernetes.io/projected/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-kube-api-access-5zt5l\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.488640 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-utilities\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.488667 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-catalog-content\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.516462 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zt5l\" (UniqueName: \"kubernetes.io/projected/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-kube-api-access-5zt5l\") pod \"community-operators-bn5gn\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " pod="openshift-marketplace/community-operators-bn5gn"
Nov 24 21:57:02 crc kubenswrapper[4812]: I1124 21:57:02.663717 4812 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-bn5gn" Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.000240 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.000584 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.000636 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.001465 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.001517 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" gracePeriod=600 Nov 24 21:57:03 crc kubenswrapper[4812]: E1124 21:57:03.154031 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.307732 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn5gn"] Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.528988 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" exitCode=0 Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.529103 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"} Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.529441 4812 scope.go:117] "RemoveContainer" containerID="91e36abe3fab90d55024ad677354165b40cb30a56e8b1ec79bfa4418f8586d8f" Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.530120 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 21:57:03 crc kubenswrapper[4812]: E1124 21:57:03.530704 4812 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:57:03 crc kubenswrapper[4812]: I1124 21:57:03.534196 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn5gn" event={"ID":"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8","Type":"ContainerStarted","Data":"d6452a20e952869b25021c02a1c49bcf89247287412a0ed4ee8952a36aa590e3"} Nov 24 21:57:04 crc kubenswrapper[4812]: I1124 21:57:04.554004 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerID="8ace12be5d283480a3691b613a917fd63420dc4eff31892edbe776851caddcc8" exitCode=0 Nov 24 21:57:04 crc kubenswrapper[4812]: I1124 21:57:04.554133 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn5gn" event={"ID":"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8","Type":"ContainerDied","Data":"8ace12be5d283480a3691b613a917fd63420dc4eff31892edbe776851caddcc8"} Nov 24 21:57:04 crc kubenswrapper[4812]: I1124 21:57:04.557862 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:57:05 crc kubenswrapper[4812]: I1124 21:57:05.573066 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn5gn" event={"ID":"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8","Type":"ContainerStarted","Data":"38998f78fc6a5a102002e79c4982e4019dc5e020efbb5dbf05383b20d5f4b408"} Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.321487 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mldhv"] Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.326587 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.358790 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mldhv"] Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.457724 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmwc\" (UniqueName: \"kubernetes.io/projected/b5f67280-8ddb-46f8-b57a-854ef288da49-kube-api-access-qvmwc\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.457908 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-utilities\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.458146 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-catalog-content\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.507548 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lwq4b"] Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.511247 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.554885 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lwq4b"] Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.561950 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-utilities\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.562200 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-catalog-content\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.562484 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvmwc\" (UniqueName: \"kubernetes.io/projected/b5f67280-8ddb-46f8-b57a-854ef288da49-kube-api-access-qvmwc\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.563712 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-utilities\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " 
pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.564077 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-catalog-content\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.605559 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvmwc\" (UniqueName: \"kubernetes.io/projected/b5f67280-8ddb-46f8-b57a-854ef288da49-kube-api-access-qvmwc\") pod \"redhat-marketplace-mldhv\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.624476 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerID="38998f78fc6a5a102002e79c4982e4019dc5e020efbb5dbf05383b20d5f4b408" exitCode=0 Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.624536 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn5gn" event={"ID":"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8","Type":"ContainerDied","Data":"38998f78fc6a5a102002e79c4982e4019dc5e020efbb5dbf05383b20d5f4b408"} Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.670121 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-catalog-content\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.670323 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwxhc\" (UniqueName: \"kubernetes.io/projected/39ba55da-7815-4bac-a4c3-1006e3c9e95b-kube-api-access-gwxhc\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.670602 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-utilities\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.675094 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.772729 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-catalog-content\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.773171 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwxhc\" (UniqueName: \"kubernetes.io/projected/39ba55da-7815-4bac-a4c3-1006e3c9e95b-kube-api-access-gwxhc\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.773285 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-utilities\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.773822 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-utilities\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.773898 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-catalog-content\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.790674 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwxhc\" (UniqueName: \"kubernetes.io/projected/39ba55da-7815-4bac-a4c3-1006e3c9e95b-kube-api-access-gwxhc\") pod \"redhat-operators-lwq4b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:08 crc kubenswrapper[4812]: I1124 21:57:08.865692 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.230424 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mldhv"] Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.359075 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lwq4b"] Nov 24 21:57:09 crc kubenswrapper[4812]: W1124 21:57:09.359167 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39ba55da_7815_4bac_a4c3_1006e3c9e95b.slice/crio-faf1eeccc55c13eff824b00e30d3385d480fa0dede212b9d75685c31033cddd8 WatchSource:0}: Error finding container faf1eeccc55c13eff824b00e30d3385d480fa0dede212b9d75685c31033cddd8: Status 404 returned error can't find the container with id faf1eeccc55c13eff824b00e30d3385d480fa0dede212b9d75685c31033cddd8 Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.638650 4812 generic.go:334] "Generic (PLEG): container finished" podID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerID="2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60" exitCode=0 Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.638871 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mldhv" event={"ID":"b5f67280-8ddb-46f8-b57a-854ef288da49","Type":"ContainerDied","Data":"2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60"} Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.639009 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mldhv" event={"ID":"b5f67280-8ddb-46f8-b57a-854ef288da49","Type":"ContainerStarted","Data":"13b5e1abba127af986c866658e3a533a055255b7bbc1290e7976aa3344cee0b9"} Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.640753 4812 generic.go:334] "Generic (PLEG): container finished" podID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerID="be4fda43ae973dc938e9bbd5d01f479999a3b253e0e3d920dd7c4a8c3d2e6933" exitCode=0 Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.640795 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwq4b" event={"ID":"39ba55da-7815-4bac-a4c3-1006e3c9e95b","Type":"ContainerDied","Data":"be4fda43ae973dc938e9bbd5d01f479999a3b253e0e3d920dd7c4a8c3d2e6933"} Nov 24 21:57:09 crc kubenswrapper[4812]: I1124 21:57:09.640820 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwq4b" event={"ID":"39ba55da-7815-4bac-a4c3-1006e3c9e95b","Type":"ContainerStarted","Data":"faf1eeccc55c13eff824b00e30d3385d480fa0dede212b9d75685c31033cddd8"} Nov 24 21:57:10 crc kubenswrapper[4812]: I1124 21:57:10.653277 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mldhv" event={"ID":"b5f67280-8ddb-46f8-b57a-854ef288da49","Type":"ContainerStarted","Data":"21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7"} Nov 24 21:57:10 crc kubenswrapper[4812]: I1124 21:57:10.658096 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn5gn" event={"ID":"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8","Type":"ContainerStarted","Data":"2c7957c8b2ad362e57ec4eb092ed6d89376fd7bbdff952f1e129eb36f1d37344"} Nov 24 21:57:10 crc kubenswrapper[4812]: I1124 21:57:10.701532 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-bn5gn" podStartSLOduration=3.77442859 podStartE2EDuration="8.701515248s" podCreationTimestamp="2025-11-24 21:57:02 +0000 UTC" firstStartedPulling="2025-11-24 21:57:04.557563954 +0000 UTC m=+9618.346516325" lastFinishedPulling="2025-11-24 21:57:09.484650612 +0000 UTC m=+9623.273602983" observedRunningTime="2025-11-24 21:57:10.691751992 +0000 UTC m=+9624.480704373" watchObservedRunningTime="2025-11-24 21:57:10.701515248 +0000 UTC m=+9624.490467619" Nov 24 21:57:11 crc kubenswrapper[4812]: I1124 21:57:11.669155 4812 generic.go:334] "Generic (PLEG): container finished" podID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerID="21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7" exitCode=0 Nov 24 21:57:11 crc kubenswrapper[4812]: I1124 21:57:11.669388 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mldhv" event={"ID":"b5f67280-8ddb-46f8-b57a-854ef288da49","Type":"ContainerDied","Data":"21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7"} Nov 24 21:57:11 crc kubenswrapper[4812]: I1124 21:57:11.671677 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwq4b" event={"ID":"39ba55da-7815-4bac-a4c3-1006e3c9e95b","Type":"ContainerStarted","Data":"322624ca7118ea0798da9d1f25951290881beffc91ef7f4df69e1e7e8a740526"} Nov 24 21:57:12 crc kubenswrapper[4812]: I1124 21:57:12.667257 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bn5gn" Nov 24 21:57:12 crc kubenswrapper[4812]: I1124 21:57:12.668588 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bn5gn" Nov 24 21:57:13 crc kubenswrapper[4812]: I1124 21:57:13.711138 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mldhv" event={"ID":"b5f67280-8ddb-46f8-b57a-854ef288da49","Type":"ContainerStarted","Data":"2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef"} Nov 24 21:57:13 crc kubenswrapper[4812]: I1124 21:57:13.736368 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mldhv" podStartSLOduration=3.160055784 podStartE2EDuration="5.736350134s" podCreationTimestamp="2025-11-24 21:57:08 +0000 UTC" firstStartedPulling="2025-11-24 21:57:09.640982709 +0000 UTC m=+9623.429935080" lastFinishedPulling="2025-11-24 21:57:12.217277029 +0000 UTC m=+9626.006229430" observedRunningTime="2025-11-24 21:57:13.728173233 +0000 UTC m=+9627.517125614" watchObservedRunningTime="2025-11-24 21:57:13.736350134 +0000 UTC m=+9627.525302505" Nov 24 21:57:13 crc kubenswrapper[4812]: I1124 21:57:13.806230 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bn5gn" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="registry-server" probeResult="failure" output=< Nov 24 21:57:13 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:57:13 crc kubenswrapper[4812]: > Nov 24 21:57:14 crc kubenswrapper[4812]: I1124 21:57:14.966631 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 21:57:14 crc kubenswrapper[4812]: E1124 21:57:14.967263 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:57:15 crc kubenswrapper[4812]: I1124 21:57:15.734842 4812 generic.go:334] "Generic (PLEG): container finished" podID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerID="322624ca7118ea0798da9d1f25951290881beffc91ef7f4df69e1e7e8a740526" exitCode=0 Nov 24 21:57:15 crc kubenswrapper[4812]: I1124 21:57:15.734912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwq4b" event={"ID":"39ba55da-7815-4bac-a4c3-1006e3c9e95b","Type":"ContainerDied","Data":"322624ca7118ea0798da9d1f25951290881beffc91ef7f4df69e1e7e8a740526"} Nov 24 21:57:17 crc kubenswrapper[4812]: I1124 21:57:17.760869 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwq4b" event={"ID":"39ba55da-7815-4bac-a4c3-1006e3c9e95b","Type":"ContainerStarted","Data":"b23993295c90df7e888235b3cc9f08df8819f0986a8c23e85b556b1663a289ca"} Nov 24 21:57:17 crc kubenswrapper[4812]: I1124 21:57:17.784264 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lwq4b" podStartSLOduration=2.481898742 podStartE2EDuration="9.784243868s" podCreationTimestamp="2025-11-24 21:57:08 +0000 UTC" firstStartedPulling="2025-11-24 21:57:09.643948523 +0000 UTC m=+9623.432900904" lastFinishedPulling="2025-11-24 21:57:16.946293659 +0000 UTC m=+9630.735246030" observedRunningTime="2025-11-24 21:57:17.775594274 +0000 UTC m=+9631.564546645" watchObservedRunningTime="2025-11-24 21:57:17.784243868 +0000 UTC m=+9631.573196229" Nov 24 21:57:18 crc kubenswrapper[4812]: I1124 21:57:18.676076 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:18 crc kubenswrapper[4812]: I1124 21:57:18.676412 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:18 crc kubenswrapper[4812]: I1124 21:57:18.866817 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:18 crc kubenswrapper[4812]: I1124 21:57:18.867995 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:19 crc kubenswrapper[4812]: I1124 21:57:19.747491 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mldhv" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="registry-server" probeResult="failure" output=< Nov 24 21:57:19 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:57:19 crc kubenswrapper[4812]: > Nov 24 21:57:19 crc kubenswrapper[4812]: I1124 21:57:19.916372 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lwq4b" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server" probeResult="failure" output=< Nov 24 21:57:19 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:57:19 crc kubenswrapper[4812]: > Nov 24 21:57:23 crc kubenswrapper[4812]: I1124 21:57:23.741094 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-bn5gn" 
podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="registry-server" probeResult="failure" output=< Nov 24 21:57:23 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:57:23 crc kubenswrapper[4812]: > Nov 24 21:57:26 crc kubenswrapper[4812]: I1124 21:57:26.976302 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 21:57:26 crc kubenswrapper[4812]: E1124 21:57:26.976793 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:57:28 crc kubenswrapper[4812]: I1124 21:57:28.755738 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:28 crc kubenswrapper[4812]: I1124 21:57:28.895398 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:29 crc kubenswrapper[4812]: I1124 21:57:29.005846 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mldhv"] Nov 24 21:57:29 crc kubenswrapper[4812]: I1124 21:57:29.896654 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mldhv" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="registry-server" containerID="cri-o://2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef" gracePeriod=2 Nov 24 21:57:29 crc kubenswrapper[4812]: I1124 21:57:29.915458 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lwq4b" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server" probeResult="failure" output=< Nov 24 21:57:29 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:57:29 crc kubenswrapper[4812]: > Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.465324 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.613890 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-catalog-content\") pod \"b5f67280-8ddb-46f8-b57a-854ef288da49\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.614228 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvmwc\" (UniqueName: \"kubernetes.io/projected/b5f67280-8ddb-46f8-b57a-854ef288da49-kube-api-access-qvmwc\") pod \"b5f67280-8ddb-46f8-b57a-854ef288da49\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.614446 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-utilities\") pod \"b5f67280-8ddb-46f8-b57a-854ef288da49\" (UID: \"b5f67280-8ddb-46f8-b57a-854ef288da49\") " Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.615729 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-utilities" (OuterVolumeSpecName: "utilities") pod "b5f67280-8ddb-46f8-b57a-854ef288da49" (UID: "b5f67280-8ddb-46f8-b57a-854ef288da49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.619161 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f67280-8ddb-46f8-b57a-854ef288da49-kube-api-access-qvmwc" (OuterVolumeSpecName: "kube-api-access-qvmwc") pod "b5f67280-8ddb-46f8-b57a-854ef288da49" (UID: "b5f67280-8ddb-46f8-b57a-854ef288da49"). InnerVolumeSpecName "kube-api-access-qvmwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.633807 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5f67280-8ddb-46f8-b57a-854ef288da49" (UID: "b5f67280-8ddb-46f8-b57a-854ef288da49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.717074 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.717112 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f67280-8ddb-46f8-b57a-854ef288da49-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.717128 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvmwc\" (UniqueName: \"kubernetes.io/projected/b5f67280-8ddb-46f8-b57a-854ef288da49-kube-api-access-qvmwc\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.914635 4812 generic.go:334] "Generic (PLEG): container finished" podID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerID="2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef" exitCode=0 Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.914695 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mldhv" event={"ID":"b5f67280-8ddb-46f8-b57a-854ef288da49","Type":"ContainerDied","Data":"2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef"} Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.914732 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mldhv" event={"ID":"b5f67280-8ddb-46f8-b57a-854ef288da49","Type":"ContainerDied","Data":"13b5e1abba127af986c866658e3a533a055255b7bbc1290e7976aa3344cee0b9"} Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.914743 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mldhv" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.914787 4812 scope.go:117] "RemoveContainer" containerID="2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.944423 4812 scope.go:117] "RemoveContainer" containerID="21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.963824 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mldhv"] Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.986574 4812 scope.go:117] "RemoveContainer" containerID="2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60" Nov 24 21:57:30 crc kubenswrapper[4812]: I1124 21:57:30.990665 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mldhv"] Nov 24 21:57:31 crc kubenswrapper[4812]: I1124 21:57:31.034075 4812 scope.go:117] "RemoveContainer" containerID="2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef" Nov 24 21:57:31 crc kubenswrapper[4812]: E1124 21:57:31.034623 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef\": container with ID starting with 2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef not found: ID does not exist" containerID="2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef" Nov 24 21:57:31 crc kubenswrapper[4812]: I1124 21:57:31.034682 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef"} err="failed to get container status \"2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef\": rpc error: code = NotFound desc = could not find container \"2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef\": container with ID starting with 2b21a1e39020d519cbfba45fc8a3104f54f9f14999f375d7a6a83084d4e5c1ef not found: ID does not exist" Nov 24 21:57:31 crc kubenswrapper[4812]: I1124 21:57:31.034725 4812 scope.go:117] "RemoveContainer" containerID="21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7" Nov 24 21:57:31 crc kubenswrapper[4812]: E1124 21:57:31.035270 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7\": container with ID starting with 21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7 not found: ID does not exist" containerID="21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7" Nov 24 21:57:31 crc kubenswrapper[4812]: I1124 21:57:31.035326 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7"} err="failed to get container status \"21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7\": rpc error: code = NotFound desc = could not find container \"21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7\": container with ID starting with 21cf2ce4a28de92872a9e38ea043b8d51ec5136e3064c0877d3305cfc6625fa7 not found: ID does not exist" Nov 24 21:57:31 crc kubenswrapper[4812]: I1124 21:57:31.035366 4812 scope.go:117] "RemoveContainer" 
containerID="2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60" Nov 24 21:57:31 crc kubenswrapper[4812]: E1124 21:57:31.037482 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60\": container with ID starting with 2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60 not found: ID does not exist" containerID="2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60" Nov 24 21:57:31 crc kubenswrapper[4812]: I1124 21:57:31.037529 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60"} err="failed to get container status \"2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60\": rpc error: code = NotFound desc = could not find container \"2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60\": container with ID starting with 2326eb8922bb52bcc22ebd759bcac1923050ceb659a37c1073a684e73f9d9e60 not found: ID does not exist" Nov 24 21:57:32 crc kubenswrapper[4812]: I1124 21:57:32.715745 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bn5gn" Nov 24 21:57:32 crc kubenswrapper[4812]: I1124 21:57:32.772152 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bn5gn" Nov 24 21:57:32 crc kubenswrapper[4812]: I1124 21:57:32.983985 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" path="/var/lib/kubelet/pods/b5f67280-8ddb-46f8-b57a-854ef288da49/volumes" Nov 24 21:57:33 crc kubenswrapper[4812]: I1124 21:57:33.408473 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bn5gn"] Nov 24 21:57:33 crc kubenswrapper[4812]: I1124 21:57:33.950144 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bn5gn" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="registry-server" containerID="cri-o://2c7957c8b2ad362e57ec4eb092ed6d89376fd7bbdff952f1e129eb36f1d37344" gracePeriod=2 Nov 24 21:57:34 crc kubenswrapper[4812]: I1124 21:57:34.965602 4812 generic.go:334] "Generic (PLEG): container finished" podID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerID="2c7957c8b2ad362e57ec4eb092ed6d89376fd7bbdff952f1e129eb36f1d37344" exitCode=0 Nov 24 21:57:34 crc kubenswrapper[4812]: I1124 21:57:34.990270 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn5gn" event={"ID":"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8","Type":"ContainerDied","Data":"2c7957c8b2ad362e57ec4eb092ed6d89376fd7bbdff952f1e129eb36f1d37344"} Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.343920 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bn5gn" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.414634 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zt5l\" (UniqueName: \"kubernetes.io/projected/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-kube-api-access-5zt5l\") pod \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.414706 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-utilities\") pod \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.414854 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-catalog-content\") pod \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\" (UID: \"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8\") " Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.424449 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-utilities" (OuterVolumeSpecName: "utilities") pod "1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" (UID: "1b9949b1-a0fa-4f2d-ac97-3dff7f280db8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.447356 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-kube-api-access-5zt5l" (OuterVolumeSpecName: "kube-api-access-5zt5l") pod "1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" (UID: "1b9949b1-a0fa-4f2d-ac97-3dff7f280db8"). InnerVolumeSpecName "kube-api-access-5zt5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.485878 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" (UID: "1b9949b1-a0fa-4f2d-ac97-3dff7f280db8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.518133 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zt5l\" (UniqueName: \"kubernetes.io/projected/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-kube-api-access-5zt5l\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.518176 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.518189 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.990069 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn5gn" event={"ID":"1b9949b1-a0fa-4f2d-ac97-3dff7f280db8","Type":"ContainerDied","Data":"d6452a20e952869b25021c02a1c49bcf89247287412a0ed4ee8952a36aa590e3"} Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.990151 4812 scope.go:117] "RemoveContainer" containerID="2c7957c8b2ad362e57ec4eb092ed6d89376fd7bbdff952f1e129eb36f1d37344" Nov 24 21:57:35 crc kubenswrapper[4812]: I1124 21:57:35.990152 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bn5gn" Nov 24 21:57:36 crc kubenswrapper[4812]: I1124 21:57:36.028436 4812 scope.go:117] "RemoveContainer" containerID="38998f78fc6a5a102002e79c4982e4019dc5e020efbb5dbf05383b20d5f4b408" Nov 24 21:57:36 crc kubenswrapper[4812]: I1124 21:57:36.056213 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bn5gn"] Nov 24 21:57:36 crc kubenswrapper[4812]: I1124 21:57:36.066528 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bn5gn"] Nov 24 21:57:36 crc kubenswrapper[4812]: I1124 21:57:36.537381 4812 scope.go:117] "RemoveContainer" containerID="8ace12be5d283480a3691b613a917fd63420dc4eff31892edbe776851caddcc8" Nov 24 21:57:36 crc kubenswrapper[4812]: I1124 21:57:36.983013 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" path="/var/lib/kubelet/pods/1b9949b1-a0fa-4f2d-ac97-3dff7f280db8/volumes" Nov 24 21:57:39 crc kubenswrapper[4812]: I1124 21:57:39.943288 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lwq4b" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server" probeResult="failure" output=< Nov 24 21:57:39 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:57:39 crc kubenswrapper[4812]: > Nov 24 21:57:40 crc kubenswrapper[4812]: I1124 21:57:40.966479 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 21:57:40 crc kubenswrapper[4812]: E1124 21:57:40.967413 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:57:50 crc kubenswrapper[4812]: I1124 21:57:50.378766 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lwq4b" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server" probeResult="failure" output=< Nov 24 21:57:50 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s Nov 24 21:57:50 crc kubenswrapper[4812]: > Nov 24 21:57:54 crc kubenswrapper[4812]: I1124 21:57:54.966177 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 21:57:54 crc kubenswrapper[4812]: E1124 21:57:54.967226 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:57:58 crc kubenswrapper[4812]: I1124 21:57:58.952470 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:59 crc kubenswrapper[4812]: I1124 21:57:59.032176 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:57:59 crc kubenswrapper[4812]: I1124 21:57:59.226868 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lwq4b"] Nov 24 21:58:00 crc kubenswrapper[4812]: I1124 21:58:00.299326 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lwq4b" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server" containerID="cri-o://b23993295c90df7e888235b3cc9f08df8819f0986a8c23e85b556b1663a289ca" gracePeriod=2 Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.317524 4812 generic.go:334] "Generic (PLEG): container finished" podID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerID="b23993295c90df7e888235b3cc9f08df8819f0986a8c23e85b556b1663a289ca" exitCode=0 Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.317575 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwq4b" event={"ID":"39ba55da-7815-4bac-a4c3-1006e3c9e95b","Type":"ContainerDied","Data":"b23993295c90df7e888235b3cc9f08df8819f0986a8c23e85b556b1663a289ca"} Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.687731 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.844100 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-catalog-content\") pod \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.844214 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwxhc\" (UniqueName: \"kubernetes.io/projected/39ba55da-7815-4bac-a4c3-1006e3c9e95b-kube-api-access-gwxhc\") pod \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.844383 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-utilities\") pod \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\" (UID: \"39ba55da-7815-4bac-a4c3-1006e3c9e95b\") " Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.845570 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-utilities" (OuterVolumeSpecName: "utilities") pod "39ba55da-7815-4bac-a4c3-1006e3c9e95b" (UID: "39ba55da-7815-4bac-a4c3-1006e3c9e95b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.854677 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39ba55da-7815-4bac-a4c3-1006e3c9e95b-kube-api-access-gwxhc" (OuterVolumeSpecName: "kube-api-access-gwxhc") pod "39ba55da-7815-4bac-a4c3-1006e3c9e95b" (UID: "39ba55da-7815-4bac-a4c3-1006e3c9e95b"). InnerVolumeSpecName "kube-api-access-gwxhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.948675 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwxhc\" (UniqueName: \"kubernetes.io/projected/39ba55da-7815-4bac-a4c3-1006e3c9e95b-kube-api-access-gwxhc\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.948711 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:01 crc kubenswrapper[4812]: I1124 21:58:01.959818 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39ba55da-7815-4bac-a4c3-1006e3c9e95b" (UID: "39ba55da-7815-4bac-a4c3-1006e3c9e95b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.051584 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39ba55da-7815-4bac-a4c3-1006e3c9e95b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.335529 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lwq4b" event={"ID":"39ba55da-7815-4bac-a4c3-1006e3c9e95b","Type":"ContainerDied","Data":"faf1eeccc55c13eff824b00e30d3385d480fa0dede212b9d75685c31033cddd8"} Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.335593 4812 scope.go:117] "RemoveContainer" containerID="b23993295c90df7e888235b3cc9f08df8819f0986a8c23e85b556b1663a289ca" Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.335632 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lwq4b" Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.372971 4812 scope.go:117] "RemoveContainer" containerID="322624ca7118ea0798da9d1f25951290881beffc91ef7f4df69e1e7e8a740526" Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.394389 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lwq4b"] Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.418652 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lwq4b"] Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.418755 4812 scope.go:117] "RemoveContainer" containerID="be4fda43ae973dc938e9bbd5d01f479999a3b253e0e3d920dd7c4a8c3d2e6933" Nov 24 21:58:02 crc kubenswrapper[4812]: I1124 21:58:02.984148 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" path="/var/lib/kubelet/pods/39ba55da-7815-4bac-a4c3-1006e3c9e95b/volumes" Nov 24 21:58:05 crc kubenswrapper[4812]: I1124 21:58:05.967018 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 21:58:05 crc kubenswrapper[4812]: E1124 21:58:05.968592 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:58:17 crc kubenswrapper[4812]: I1124 21:58:17.966570 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 21:58:17 crc kubenswrapper[4812]: E1124 21:58:17.967648 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 21:58:20 crc kubenswrapper[4812]: I1124 21:58:20.595585 4812 generic.go:334] "Generic (PLEG): container finished" podID="2f62942a-1274-4b4d-8f6a-82488ebd090b" 
containerID="108d897e7e8b9577c7d12ae8b3cdf416a6143ae0cecaff25caa3cc39e4cfc053" exitCode=0 Nov 24 21:58:20 crc kubenswrapper[4812]: I1124 21:58:20.595676 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z" event={"ID":"2f62942a-1274-4b4d-8f6a-82488ebd090b","Type":"ContainerDied","Data":"108d897e7e8b9577c7d12ae8b3cdf416a6143ae0cecaff25caa3cc39e4cfc053"} Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.124858 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.269944 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-inventory\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.270014 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-0\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.270055 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-1\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.270101 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-combined-ca-bundle\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.270162 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-1\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.271176 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cells-global-config-0\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.271224 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82zv4\" (UniqueName: \"kubernetes.io/projected/2f62942a-1274-4b4d-8f6a-82488ebd090b-kube-api-access-82zv4\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.271255 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-0\") pod 
\"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.271298 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-ssh-key\") pod \"2f62942a-1274-4b4d-8f6a-82488ebd090b\" (UID: \"2f62942a-1274-4b4d-8f6a-82488ebd090b\") " Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.278493 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f62942a-1274-4b4d-8f6a-82488ebd090b-kube-api-access-82zv4" (OuterVolumeSpecName: "kube-api-access-82zv4") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "kube-api-access-82zv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.278647 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.304323 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.306143 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.319045 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.323724 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-inventory" (OuterVolumeSpecName: "inventory") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.330004 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.330805 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.334020 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2f62942a-1274-4b4d-8f6a-82488ebd090b" (UID: "2f62942a-1274-4b4d-8f6a-82488ebd090b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.373966 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374003 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82zv4\" (UniqueName: \"kubernetes.io/projected/2f62942a-1274-4b4d-8f6a-82488ebd090b-kube-api-access-82zv4\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374012 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374021 4812 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374030 4812 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374039 4812 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374048 4812 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374058 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.374066 4812 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2f62942a-1274-4b4d-8f6a-82488ebd090b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" 
Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.621104 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z" event={"ID":"2f62942a-1274-4b4d-8f6a-82488ebd090b","Type":"ContainerDied","Data":"f3fa4a88a1bffb2c91fbe3b9e28237b2791964bd8452a43c8c20de9f891299e8"}
Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.621396 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3fa4a88a1bffb2c91fbe3b9e28237b2791964bd8452a43c8c20de9f891299e8"
Nov 24 21:58:22 crc kubenswrapper[4812]: I1124 21:58:22.621153 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z"
Nov 24 21:58:31 crc kubenswrapper[4812]: I1124 21:58:31.966018 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 21:58:31 crc kubenswrapper[4812]: E1124 21:58:31.967075 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:58:43 crc kubenswrapper[4812]: I1124 21:58:43.965741 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 21:58:43 crc kubenswrapper[4812]: E1124 21:58:43.966375 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:58:56 crc kubenswrapper[4812]: I1124 21:58:56.980731 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 21:58:56 crc kubenswrapper[4812]: E1124 21:58:56.982929 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:59:05 crc kubenswrapper[4812]: I1124 21:59:05.672441 4812 scope.go:117] "RemoveContainer" containerID="bcb5748eff25d395dbb81501a2cd06fa11838fc3d554ccfcb317b9eb5fdcf657"
Nov 24 21:59:05 crc kubenswrapper[4812]: I1124 21:59:05.714175 4812 scope.go:117] "RemoveContainer" containerID="3e2c50827bc916d69d5188231d34fae9c2ee2010996cdfca2b2d89130c0fabe2"
Nov 24 21:59:05 crc kubenswrapper[4812]: I1124 21:59:05.787182 4812 scope.go:117] "RemoveContainer" containerID="e18cb4c7bd76f31ef34b97416c0a7ac86930d951ec16e6e81ecdab46c3136fe2"
Nov 24 21:59:09 crc kubenswrapper[4812]: I1124 21:59:09.966658 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 21:59:09 crc kubenswrapper[4812]: E1124 21:59:09.968588 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:59:21 crc kubenswrapper[4812]: I1124 21:59:21.966389 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 21:59:21 crc kubenswrapper[4812]: E1124 21:59:21.969314 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:59:36 crc kubenswrapper[4812]: I1124 21:59:36.968650 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 21:59:36 crc kubenswrapper[4812]: E1124 21:59:36.969833 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 21:59:48 crc kubenswrapper[4812]: I1124 21:59:48.965916 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 21:59:48 crc kubenswrapper[4812]: E1124 21:59:48.966649 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.177575 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"]
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.178822 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f62942a-1274-4b4d-8f6a-82488ebd090b" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.178842 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f62942a-1274-4b4d-8f6a-82488ebd090b" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.178875 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="extract-content"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.178885 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="extract-content"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.178899 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="extract-utilities"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.178906 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="extract-utilities"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.178920 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.178929 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.178947 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="extract-utilities"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.178957 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="extract-utilities"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.178966 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="extract-content"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.178974 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="extract-content"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.178982 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.178990 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.179014 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="extract-content"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.179022 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="extract-content"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.179033 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="extract-utilities"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.179041 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="extract-utilities"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.179053 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.179062 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.179304 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f62942a-1274-4b4d-8f6a-82488ebd090b" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.179355 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="39ba55da-7815-4bac-a4c3-1006e3c9e95b" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.179376 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f67280-8ddb-46f8-b57a-854ef288da49" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.179388 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9949b1-a0fa-4f2d-ac97-3dff7f280db8" containerName="registry-server"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.180606 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.185148 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.185263 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.209590 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"]
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.361647 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0751e902-4521-4f65-96b7-1f5da2edb9e3-config-volume\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.361978 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0751e902-4521-4f65-96b7-1f5da2edb9e3-secret-volume\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.362082 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xcwc\" (UniqueName: \"kubernetes.io/projected/0751e902-4521-4f65-96b7-1f5da2edb9e3-kube-api-access-4xcwc\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.465557 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0751e902-4521-4f65-96b7-1f5da2edb9e3-config-volume\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.465785 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0751e902-4521-4f65-96b7-1f5da2edb9e3-secret-volume\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.465854 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xcwc\" (UniqueName: \"kubernetes.io/projected/0751e902-4521-4f65-96b7-1f5da2edb9e3-kube-api-access-4xcwc\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.467468 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0751e902-4521-4f65-96b7-1f5da2edb9e3-config-volume\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.473604 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0751e902-4521-4f65-96b7-1f5da2edb9e3-secret-volume\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.487316 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xcwc\" (UniqueName: \"kubernetes.io/projected/0751e902-4521-4f65-96b7-1f5da2edb9e3-kube-api-access-4xcwc\") pod \"collect-profiles-29400360-wb6n8\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.512227 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:00 crc kubenswrapper[4812]: I1124 22:00:00.973221 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 22:00:00 crc kubenswrapper[4812]: E1124 22:00:00.974262 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:00:01 crc kubenswrapper[4812]: I1124 22:00:01.010257 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"]
Nov 24 22:00:01 crc kubenswrapper[4812]: I1124 22:00:01.512060 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8" event={"ID":"0751e902-4521-4f65-96b7-1f5da2edb9e3","Type":"ContainerStarted","Data":"fdd8932e35f101e80ff3e94946c7190912e7019728c5e1876ff7e937541106d6"}
Nov 24 22:00:01 crc kubenswrapper[4812]: I1124 22:00:01.512404 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8" event={"ID":"0751e902-4521-4f65-96b7-1f5da2edb9e3","Type":"ContainerStarted","Data":"d2f3dc5262d70d8c1bc3d8e4a8c421ae66eb894ace51854008a592bb3908e25b"}
Nov 24 22:00:01 crc kubenswrapper[4812]: I1124 22:00:01.531936 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8" podStartSLOduration=1.531921537 podStartE2EDuration="1.531921537s" podCreationTimestamp="2025-11-24 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:00:01.525398673 +0000 UTC m=+9795.314351054" watchObservedRunningTime="2025-11-24 22:00:01.531921537 +0000 UTC m=+9795.320873908"
Nov 24 22:00:02 crc kubenswrapper[4812]: I1124 22:00:02.525279 4812 generic.go:334] "Generic (PLEG): container finished" podID="0751e902-4521-4f65-96b7-1f5da2edb9e3" containerID="fdd8932e35f101e80ff3e94946c7190912e7019728c5e1876ff7e937541106d6" exitCode=0
Nov 24 22:00:02 crc kubenswrapper[4812]: I1124 22:00:02.525442 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8" event={"ID":"0751e902-4521-4f65-96b7-1f5da2edb9e3","Type":"ContainerDied","Data":"fdd8932e35f101e80ff3e94946c7190912e7019728c5e1876ff7e937541106d6"}
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.007135 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.154372 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xcwc\" (UniqueName: \"kubernetes.io/projected/0751e902-4521-4f65-96b7-1f5da2edb9e3-kube-api-access-4xcwc\") pod \"0751e902-4521-4f65-96b7-1f5da2edb9e3\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") "
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.154718 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0751e902-4521-4f65-96b7-1f5da2edb9e3-config-volume\") pod \"0751e902-4521-4f65-96b7-1f5da2edb9e3\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") "
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.154807 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0751e902-4521-4f65-96b7-1f5da2edb9e3-secret-volume\") pod \"0751e902-4521-4f65-96b7-1f5da2edb9e3\" (UID: \"0751e902-4521-4f65-96b7-1f5da2edb9e3\") "
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.155522 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0751e902-4521-4f65-96b7-1f5da2edb9e3-config-volume" (OuterVolumeSpecName: "config-volume") pod "0751e902-4521-4f65-96b7-1f5da2edb9e3" (UID: "0751e902-4521-4f65-96b7-1f5da2edb9e3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.160672 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0751e902-4521-4f65-96b7-1f5da2edb9e3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0751e902-4521-4f65-96b7-1f5da2edb9e3" (UID: "0751e902-4521-4f65-96b7-1f5da2edb9e3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.161452 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0751e902-4521-4f65-96b7-1f5da2edb9e3-kube-api-access-4xcwc" (OuterVolumeSpecName: "kube-api-access-4xcwc") pod "0751e902-4521-4f65-96b7-1f5da2edb9e3" (UID: "0751e902-4521-4f65-96b7-1f5da2edb9e3"). InnerVolumeSpecName "kube-api-access-4xcwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.260098 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xcwc\" (UniqueName: \"kubernetes.io/projected/0751e902-4521-4f65-96b7-1f5da2edb9e3-kube-api-access-4xcwc\") on node \"crc\" DevicePath \"\""
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.260151 4812 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0751e902-4521-4f65-96b7-1f5da2edb9e3-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.260169 4812 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0751e902-4521-4f65-96b7-1f5da2edb9e3-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.559615 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8" event={"ID":"0751e902-4521-4f65-96b7-1f5da2edb9e3","Type":"ContainerDied","Data":"d2f3dc5262d70d8c1bc3d8e4a8c421ae66eb894ace51854008a592bb3908e25b"}
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.559715 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2f3dc5262d70d8c1bc3d8e4a8c421ae66eb894ace51854008a592bb3908e25b"
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.559889 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-wb6n8"
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.643023 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj"]
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.654789 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-j5khj"]
Nov 24 22:00:04 crc kubenswrapper[4812]: I1124 22:00:04.987617 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1cb8a30-be67-4462-8e91-dc626153f19e" path="/var/lib/kubelet/pods/b1cb8a30-be67-4462-8e91-dc626153f19e/volumes"
Nov 24 22:00:05 crc kubenswrapper[4812]: I1124 22:00:05.879898 4812 scope.go:117] "RemoveContainer" containerID="cdb0c81f9f88ff471816efdea96c2565b89945f0ce53654ba913c40c487d60ae"
Nov 24 22:00:13 crc kubenswrapper[4812]: I1124 22:00:13.966089 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 22:00:13 crc kubenswrapper[4812]: E1124 22:00:13.966807 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:00:16 crc kubenswrapper[4812]: I1124 22:00:16.703411 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Nov 24 22:00:16 crc kubenswrapper[4812]: I1124 22:00:16.703944 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="fd16f350-87b3-4ddc-baf8-fd693b722413" containerName="adoption" containerID="cri-o://7220ae976c51ea972924f92b1b0a12ed85267f705b6dd4b01b7231de3f0727e3" gracePeriod=30
Nov 24 22:00:26 crc kubenswrapper[4812]: I1124 22:00:26.979010 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 22:00:26 crc kubenswrapper[4812]: E1124 22:00:26.980604 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:00:38 crc kubenswrapper[4812]: I1124 22:00:38.966261 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 22:00:38 crc kubenswrapper[4812]: E1124 22:00:38.967600 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.254085 4812 generic.go:334] "Generic (PLEG): container finished" podID="fd16f350-87b3-4ddc-baf8-fd693b722413" containerID="7220ae976c51ea972924f92b1b0a12ed85267f705b6dd4b01b7231de3f0727e3" exitCode=137
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.254320 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fd16f350-87b3-4ddc-baf8-fd693b722413","Type":"ContainerDied","Data":"7220ae976c51ea972924f92b1b0a12ed85267f705b6dd4b01b7231de3f0727e3"}
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.254691 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"fd16f350-87b3-4ddc-baf8-fd693b722413","Type":"ContainerDied","Data":"17f8bccb77f77502a3c88c7702f56ed619f989224562b13677c1c76740c74daf"}
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.254712 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17f8bccb77f77502a3c88c7702f56ed619f989224562b13677c1c76740c74daf"
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.314664 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.447974 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzfnn\" (UniqueName: \"kubernetes.io/projected/fd16f350-87b3-4ddc-baf8-fd693b722413-kube-api-access-tzfnn\") pod \"fd16f350-87b3-4ddc-baf8-fd693b722413\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") "
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.449009 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\") pod \"fd16f350-87b3-4ddc-baf8-fd693b722413\" (UID: \"fd16f350-87b3-4ddc-baf8-fd693b722413\") "
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.454958 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd16f350-87b3-4ddc-baf8-fd693b722413-kube-api-access-tzfnn" (OuterVolumeSpecName: "kube-api-access-tzfnn") pod "fd16f350-87b3-4ddc-baf8-fd693b722413" (UID: "fd16f350-87b3-4ddc-baf8-fd693b722413"). InnerVolumeSpecName "kube-api-access-tzfnn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.503286 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944" (OuterVolumeSpecName: "mariadb-data") pod "fd16f350-87b3-4ddc-baf8-fd693b722413" (UID: "fd16f350-87b3-4ddc-baf8-fd693b722413"). InnerVolumeSpecName "pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.551736 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\") on node \"crc\" "
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.551801 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzfnn\" (UniqueName: \"kubernetes.io/projected/fd16f350-87b3-4ddc-baf8-fd693b722413-kube-api-access-tzfnn\") on node \"crc\" DevicePath \"\""
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.597249 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.597442 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944") on node "crc"
Nov 24 22:00:47 crc kubenswrapper[4812]: I1124 22:00:47.653424 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b3622206-f1f1-48f5-9c0a-ec5ee8848944\") on node \"crc\" DevicePath \"\""
Nov 24 22:00:48 crc kubenswrapper[4812]: I1124 22:00:48.270839 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data"
Nov 24 22:00:48 crc kubenswrapper[4812]: I1124 22:00:48.316967 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Nov 24 22:00:48 crc kubenswrapper[4812]: I1124 22:00:48.331480 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"]
Nov 24 22:00:49 crc kubenswrapper[4812]: I1124 22:00:49.002785 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd16f350-87b3-4ddc-baf8-fd693b722413" path="/var/lib/kubelet/pods/fd16f350-87b3-4ddc-baf8-fd693b722413/volumes"
Nov 24 22:00:49 crc kubenswrapper[4812]: I1124 22:00:49.417299 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Nov 24 22:00:49 crc kubenswrapper[4812]: I1124 22:00:49.417569 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="21f4d7d0-d494-4c0e-8a72-5920867ba86d" containerName="adoption" containerID="cri-o://52296d3c9a774ef743b4c069f03eb78366aedce02ccfe46129409317ba18fdfa" gracePeriod=30
Nov 24 22:00:51 crc kubenswrapper[4812]: I1124 22:00:51.966542 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 22:00:51 crc kubenswrapper[4812]: E1124 22:00:51.967299 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.176283 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400361-2gkwr"]
Nov 24 22:01:00 crc kubenswrapper[4812]: E1124 22:01:00.177456 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd16f350-87b3-4ddc-baf8-fd693b722413" containerName="adoption"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.177474 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd16f350-87b3-4ddc-baf8-fd693b722413" containerName="adoption"
Nov 24 22:01:00 crc kubenswrapper[4812]: E1124 22:01:00.177527 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0751e902-4521-4f65-96b7-1f5da2edb9e3" containerName="collect-profiles"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.177536 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="0751e902-4521-4f65-96b7-1f5da2edb9e3" containerName="collect-profiles"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.177795 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="0751e902-4521-4f65-96b7-1f5da2edb9e3" containerName="collect-profiles"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.177837 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd16f350-87b3-4ddc-baf8-fd693b722413" containerName="adoption"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.178736 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.187293 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-2gkwr"]
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.315792 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w289n\" (UniqueName: \"kubernetes.io/projected/a4e84932-5719-400d-8c94-df947dd51038-kube-api-access-w289n\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.315903 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-fernet-keys\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.315977 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-config-data\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.316030 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-combined-ca-bundle\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.418393 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-config-data\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.418550 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-combined-ca-bundle\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.418879 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w289n\" (UniqueName: \"kubernetes.io/projected/a4e84932-5719-400d-8c94-df947dd51038-kube-api-access-w289n\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.418994 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-fernet-keys\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.428321 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-fernet-keys\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.443020 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-config-data\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.444231 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-combined-ca-bundle\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.454797 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w289n\" (UniqueName: \"kubernetes.io/projected/a4e84932-5719-400d-8c94-df947dd51038-kube-api-access-w289n\") pod \"keystone-cron-29400361-2gkwr\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") " pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:00 crc kubenswrapper[4812]: I1124 22:01:00.514250 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:01 crc kubenswrapper[4812]: I1124 22:01:01.065047 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-2gkwr"]
Nov 24 22:01:01 crc kubenswrapper[4812]: I1124 22:01:01.453381 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-2gkwr" event={"ID":"a4e84932-5719-400d-8c94-df947dd51038","Type":"ContainerStarted","Data":"c635875f2d402a5c070a89836008a933c25bb8503db102aab04fdef7419a2655"}
Nov 24 22:01:01 crc kubenswrapper[4812]: I1124 22:01:01.454665 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-2gkwr" event={"ID":"a4e84932-5719-400d-8c94-df947dd51038","Type":"ContainerStarted","Data":"54c64cf8d4709eb951668d126ae6428e4b9c9784c05065469ac908f0395999d1"}
Nov 24 22:01:01 crc kubenswrapper[4812]: I1124 22:01:01.494010 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400361-2gkwr" podStartSLOduration=1.4939746280000001 podStartE2EDuration="1.493974628s" podCreationTimestamp="2025-11-24 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:01:01.477364769 +0000 UTC m=+9855.266317180" watchObservedRunningTime="2025-11-24 22:01:01.493974628 +0000 UTC m=+9855.282927099"
Nov 24 22:01:03 crc kubenswrapper[4812]: I1124 22:01:03.965597 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 22:01:03 crc kubenswrapper[4812]: E1124 22:01:03.966556 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:01:05 crc kubenswrapper[4812]: I1124 22:01:05.509836 4812 generic.go:334] "Generic (PLEG): container finished" podID="a4e84932-5719-400d-8c94-df947dd51038" containerID="c635875f2d402a5c070a89836008a933c25bb8503db102aab04fdef7419a2655" exitCode=0
Nov 24 22:01:05 crc kubenswrapper[4812]: I1124 22:01:05.509912 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-2gkwr" event={"ID":"a4e84932-5719-400d-8c94-df947dd51038","Type":"ContainerDied","Data":"c635875f2d402a5c070a89836008a933c25bb8503db102aab04fdef7419a2655"}
Nov 24 22:01:06 crc kubenswrapper[4812]: I1124 22:01:06.007761 4812 scope.go:117] "RemoveContainer" containerID="7220ae976c51ea972924f92b1b0a12ed85267f705b6dd4b01b7231de3f0727e3"
Nov 24 22:01:06 crc kubenswrapper[4812]: I1124 22:01:06.995852 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.177010 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w289n\" (UniqueName: \"kubernetes.io/projected/a4e84932-5719-400d-8c94-df947dd51038-kube-api-access-w289n\") pod \"a4e84932-5719-400d-8c94-df947dd51038\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") "
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.177854 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-combined-ca-bundle\") pod \"a4e84932-5719-400d-8c94-df947dd51038\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") "
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.178307 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-fernet-keys\") pod \"a4e84932-5719-400d-8c94-df947dd51038\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") "
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.178471 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-config-data\") pod \"a4e84932-5719-400d-8c94-df947dd51038\" (UID: \"a4e84932-5719-400d-8c94-df947dd51038\") "
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.185089 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a4e84932-5719-400d-8c94-df947dd51038" (UID: "a4e84932-5719-400d-8c94-df947dd51038"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.185116 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4e84932-5719-400d-8c94-df947dd51038-kube-api-access-w289n" (OuterVolumeSpecName: "kube-api-access-w289n") pod "a4e84932-5719-400d-8c94-df947dd51038" (UID: "a4e84932-5719-400d-8c94-df947dd51038"). InnerVolumeSpecName "kube-api-access-w289n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.222795 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4e84932-5719-400d-8c94-df947dd51038" (UID: "a4e84932-5719-400d-8c94-df947dd51038"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.253565 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-config-data" (OuterVolumeSpecName: "config-data") pod "a4e84932-5719-400d-8c94-df947dd51038" (UID: "a4e84932-5719-400d-8c94-df947dd51038"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.281616 4812 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.281905 4812 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-fernet-keys\") on node \"crc\" DevicePath \"\""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.281993 4812 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4e84932-5719-400d-8c94-df947dd51038-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.282075 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w289n\" (UniqueName: \"kubernetes.io/projected/a4e84932-5719-400d-8c94-df947dd51038-kube-api-access-w289n\") on node \"crc\" DevicePath \"\""
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.536519 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-2gkwr" event={"ID":"a4e84932-5719-400d-8c94-df947dd51038","Type":"ContainerDied","Data":"54c64cf8d4709eb951668d126ae6428e4b9c9784c05065469ac908f0395999d1"}
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.536565 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54c64cf8d4709eb951668d126ae6428e4b9c9784c05065469ac908f0395999d1"
Nov 24 22:01:07 crc kubenswrapper[4812]: I1124 22:01:07.536869 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-2gkwr"
Nov 24 22:01:16 crc kubenswrapper[4812]: I1124 22:01:16.981264 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f"
Nov 24 22:01:16 crc kubenswrapper[4812]: E1124 22:01:16.982294 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:01:19 crc kubenswrapper[4812]: I1124 22:01:19.686505 4812 generic.go:334] "Generic (PLEG): container finished" podID="21f4d7d0-d494-4c0e-8a72-5920867ba86d" containerID="52296d3c9a774ef743b4c069f03eb78366aedce02ccfe46129409317ba18fdfa" exitCode=137
Nov 24 22:01:19 crc kubenswrapper[4812]: I1124 22:01:19.686623 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"21f4d7d0-d494-4c0e-8a72-5920867ba86d","Type":"ContainerDied","Data":"52296d3c9a774ef743b4c069f03eb78366aedce02ccfe46129409317ba18fdfa"}
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.500052 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data"
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.507750 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/21f4d7d0-d494-4c0e-8a72-5920867ba86d-ovn-data-cert\") pod \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") "
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.507846 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27m9f\" (UniqueName: \"kubernetes.io/projected/21f4d7d0-d494-4c0e-8a72-5920867ba86d-kube-api-access-27m9f\") pod \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") "
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.514761 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f4d7d0-d494-4c0e-8a72-5920867ba86d-kube-api-access-27m9f" (OuterVolumeSpecName: "kube-api-access-27m9f") pod "21f4d7d0-d494-4c0e-8a72-5920867ba86d" (UID: "21f4d7d0-d494-4c0e-8a72-5920867ba86d"). InnerVolumeSpecName "kube-api-access-27m9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.519373 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21f4d7d0-d494-4c0e-8a72-5920867ba86d-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "21f4d7d0-d494-4c0e-8a72-5920867ba86d" (UID: "21f4d7d0-d494-4c0e-8a72-5920867ba86d"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.526515 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\") pod \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\" (UID: \"21f4d7d0-d494-4c0e-8a72-5920867ba86d\") "
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.527496 4812 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/21f4d7d0-d494-4c0e-8a72-5920867ba86d-ovn-data-cert\") on node \"crc\" DevicePath \"\""
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.527522 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27m9f\" (UniqueName: \"kubernetes.io/projected/21f4d7d0-d494-4c0e-8a72-5920867ba86d-kube-api-access-27m9f\") on node \"crc\" DevicePath \"\""
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.574219 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe" (OuterVolumeSpecName: "ovn-data") pod "21f4d7d0-d494-4c0e-8a72-5920867ba86d" (UID: "21f4d7d0-d494-4c0e-8a72-5920867ba86d"). InnerVolumeSpecName "pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.629830 4812 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\") on node \"crc\" "
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.695595 4812 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.695784 4812 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe") on node "crc"
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.699310 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"21f4d7d0-d494-4c0e-8a72-5920867ba86d","Type":"ContainerDied","Data":"be0a9268e7482ce7355c550419baddae4844a795bd5444d26c50ef4b8143e6a4"}
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.699523 4812 scope.go:117] "RemoveContainer" containerID="52296d3c9a774ef743b4c069f03eb78366aedce02ccfe46129409317ba18fdfa"
Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.699784 4812 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-copy-data" Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.731910 4812 reconciler_common.go:293] "Volume detached for volume \"pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3b26af0a-ffd2-46ec-8c5b-1e30489c4ffe\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.738156 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.748647 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Nov 24 22:01:20 crc kubenswrapper[4812]: I1124 22:01:20.979788 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f4d7d0-d494-4c0e-8a72-5920867ba86d" path="/var/lib/kubelet/pods/21f4d7d0-d494-4c0e-8a72-5920867ba86d/volumes" Nov 24 22:01:27 crc kubenswrapper[4812]: I1124 22:01:27.966126 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 22:01:27 crc kubenswrapper[4812]: E1124 22:01:27.967025 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:01:38 crc kubenswrapper[4812]: I1124 22:01:38.971729 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 22:01:38 crc kubenswrapper[4812]: E1124 22:01:38.972763 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:01:53 crc kubenswrapper[4812]: I1124 22:01:53.969262 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 22:01:53 crc kubenswrapper[4812]: E1124 22:01:53.971005 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:02:08 crc kubenswrapper[4812]: I1124 22:02:08.966909 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 22:02:10 crc kubenswrapper[4812]: I1124 22:02:10.330051 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"49054f13e47f83e6b6e31e0eccf51f5fd1d8d5b8e02975ff3d97240cc7947326"} Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.752840 4812 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-must-gather-4zm6x/must-gather-2fczk"] Nov 24 22:02:20 crc kubenswrapper[4812]: E1124 22:02:20.753788 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4e84932-5719-400d-8c94-df947dd51038" containerName="keystone-cron" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.753801 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4e84932-5719-400d-8c94-df947dd51038" containerName="keystone-cron" Nov 24 22:02:20 crc kubenswrapper[4812]: E1124 22:02:20.753824 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f4d7d0-d494-4c0e-8a72-5920867ba86d" containerName="adoption" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.753830 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f4d7d0-d494-4c0e-8a72-5920867ba86d" containerName="adoption" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.754049 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f4d7d0-d494-4c0e-8a72-5920867ba86d" containerName="adoption" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.754064 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4e84932-5719-400d-8c94-df947dd51038" containerName="keystone-cron" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.755274 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.756960 4812 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4zm6x"/"default-dockercfg-m2nw7" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.757014 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4zm6x"/"kube-root-ca.crt" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.759079 4812 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4zm6x"/"openshift-service-ca.crt" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.783076 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4zm6x/must-gather-2fczk"] Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.892408 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/c5889655-07e1-486a-9f90-deb9ec6781b9-kube-api-access-297cs\") pod \"must-gather-2fczk\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.892480 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5889655-07e1-486a-9f90-deb9ec6781b9-must-gather-output\") pod \"must-gather-2fczk\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.994704 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/c5889655-07e1-486a-9f90-deb9ec6781b9-kube-api-access-297cs\") pod \"must-gather-2fczk\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.995006 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5889655-07e1-486a-9f90-deb9ec6781b9-must-gather-output\") pod \"must-gather-2fczk\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:20 crc kubenswrapper[4812]: I1124 22:02:20.995469 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5889655-07e1-486a-9f90-deb9ec6781b9-must-gather-output\") pod \"must-gather-2fczk\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:21 crc kubenswrapper[4812]: I1124 22:02:21.014697 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/c5889655-07e1-486a-9f90-deb9ec6781b9-kube-api-access-297cs\") pod \"must-gather-2fczk\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:21 crc kubenswrapper[4812]: I1124 22:02:21.076395 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:02:21 crc kubenswrapper[4812]: I1124 22:02:21.604628 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4zm6x/must-gather-2fczk"] Nov 24 22:02:21 crc kubenswrapper[4812]: I1124 22:02:21.645910 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:02:22 crc kubenswrapper[4812]: I1124 22:02:22.519284 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/must-gather-2fczk" event={"ID":"c5889655-07e1-486a-9f90-deb9ec6781b9","Type":"ContainerStarted","Data":"7d04121b98ffe35ab7a68ae31db5089da748fb316930c43c618cf7c96d3cf0fd"} Nov 24 22:02:27 crc kubenswrapper[4812]: I1124 22:02:27.582731 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/must-gather-2fczk" event={"ID":"c5889655-07e1-486a-9f90-deb9ec6781b9","Type":"ContainerStarted","Data":"8b464f8f308aa024320f40f1e2d856f89cc53b8487cb8fffa2c22a338ff4afe0"} Nov 24 22:02:27 crc kubenswrapper[4812]: I1124 22:02:27.583262 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/must-gather-2fczk" event={"ID":"c5889655-07e1-486a-9f90-deb9ec6781b9","Type":"ContainerStarted","Data":"f15ab36df18f4c44e925963216ab0cd9288b54d4807ee64374c809b407fef859"} Nov 24 22:02:27 crc kubenswrapper[4812]: I1124 22:02:27.604659 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4zm6x/must-gather-2fczk" podStartSLOduration=2.440723516 podStartE2EDuration="7.604640675s" podCreationTimestamp="2025-11-24 22:02:20 +0000 UTC" firstStartedPulling="2025-11-24 22:02:21.645598857 +0000 UTC m=+9935.434551228" lastFinishedPulling="2025-11-24 22:02:26.809516016 +0000 UTC m=+9940.598468387" observedRunningTime="2025-11-24 22:02:27.597678118 +0000 UTC m=+9941.386630529" watchObservedRunningTime="2025-11-24 22:02:27.604640675 +0000 UTC m=+9941.393593056" Nov 24 22:02:29 crc kubenswrapper[4812]: E1124 22:02:29.857308 4812 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.36:51164->38.102.83.36:46073: write tcp 38.102.83.36:51164->38.102.83.36:46073: write: broken pipe Nov 24 22:02:30 crc kubenswrapper[4812]: I1124 22:02:30.912109 4812 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-4zm6x/crc-debug-xjvmf"] Nov 24 22:02:30 crc kubenswrapper[4812]: I1124 22:02:30.914124 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.116483 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mg7g\" (UniqueName: \"kubernetes.io/projected/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-kube-api-access-5mg7g\") pod \"crc-debug-xjvmf\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.116552 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-host\") pod \"crc-debug-xjvmf\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.218370 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mg7g\" (UniqueName: \"kubernetes.io/projected/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-kube-api-access-5mg7g\") pod \"crc-debug-xjvmf\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.218429 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-host\") pod \"crc-debug-xjvmf\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.218690 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-host\") pod \"crc-debug-xjvmf\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.246214 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mg7g\" (UniqueName: \"kubernetes.io/projected/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-kube-api-access-5mg7g\") pod \"crc-debug-xjvmf\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.534863 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:02:31 crc kubenswrapper[4812]: W1124 22:02:31.565747 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf813698a_f9c2_4b17_8e8b_eb5883b2b4dc.slice/crio-18f409acc171d2bcbf4c1bd9dad04ede8a1d447f54f2f9cdfab137f088bfad90 WatchSource:0}: Error finding container 18f409acc171d2bcbf4c1bd9dad04ede8a1d447f54f2f9cdfab137f088bfad90: Status 404 returned error can't find the container with id 18f409acc171d2bcbf4c1bd9dad04ede8a1d447f54f2f9cdfab137f088bfad90 Nov 24 22:02:31 crc kubenswrapper[4812]: I1124 22:02:31.623598 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" event={"ID":"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc","Type":"ContainerStarted","Data":"18f409acc171d2bcbf4c1bd9dad04ede8a1d447f54f2f9cdfab137f088bfad90"} Nov 24 22:02:47 crc kubenswrapper[4812]: E1124 22:02:47.400472 4812 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 24 22:02:47 crc kubenswrapper[4812]: E1124 22:02:47.401068 4812 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mg7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
crc-debug-xjvmf_openshift-must-gather-4zm6x(f813698a-f9c2-4b17-8e8b-eb5883b2b4dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 22:02:47 crc kubenswrapper[4812]: E1124 22:02:47.402350 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" podUID="f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" Nov 24 22:02:47 crc kubenswrapper[4812]: E1124 22:02:47.811152 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" podUID="f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" Nov 24 22:03:02 crc kubenswrapper[4812]: I1124 22:03:02.953966 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" event={"ID":"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc","Type":"ContainerStarted","Data":"37b99fa43b53a77e04c49d32367a14323150752d585218c26b8d0c4e4b459904"} Nov 24 22:03:02 crc kubenswrapper[4812]: I1124 22:03:02.978792 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" podStartSLOduration=2.735811438 podStartE2EDuration="32.978775251s" podCreationTimestamp="2025-11-24 22:02:30 +0000 UTC" firstStartedPulling="2025-11-24 22:02:31.567942618 +0000 UTC m=+9945.356894989" lastFinishedPulling="2025-11-24 22:03:01.810906421 +0000 UTC m=+9975.599858802" observedRunningTime="2025-11-24 22:03:02.974032127 +0000 UTC m=+9976.762984498" watchObservedRunningTime="2025-11-24 22:03:02.978775251 +0000 UTC m=+9976.767727622" Nov 24 22:03:45 crc kubenswrapper[4812]: I1124 22:03:45.578979 4812 generic.go:334] "Generic (PLEG): container finished" podID="f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" containerID="37b99fa43b53a77e04c49d32367a14323150752d585218c26b8d0c4e4b459904" exitCode=0 Nov 24 22:03:45 crc kubenswrapper[4812]: I1124 22:03:45.579049 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" event={"ID":"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc","Type":"ContainerDied","Data":"37b99fa43b53a77e04c49d32367a14323150752d585218c26b8d0c4e4b459904"} Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.733269 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.782127 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-xjvmf"] Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.789114 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mg7g\" (UniqueName: \"kubernetes.io/projected/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-kube-api-access-5mg7g\") pod \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.789437 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-host\") pod \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\" (UID: \"f813698a-f9c2-4b17-8e8b-eb5883b2b4dc\") " Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.789668 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-host" (OuterVolumeSpecName: "host") pod "f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" (UID: "f813698a-f9c2-4b17-8e8b-eb5883b2b4dc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.790747 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-host\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.800548 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-kube-api-access-5mg7g" (OuterVolumeSpecName: "kube-api-access-5mg7g") pod "f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" (UID: "f813698a-f9c2-4b17-8e8b-eb5883b2b4dc"). InnerVolumeSpecName "kube-api-access-5mg7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.803846 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-xjvmf"] Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.892869 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mg7g\" (UniqueName: \"kubernetes.io/projected/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc-kube-api-access-5mg7g\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:46 crc kubenswrapper[4812]: I1124 22:03:46.983802 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" path="/var/lib/kubelet/pods/f813698a-f9c2-4b17-8e8b-eb5883b2b4dc/volumes" Nov 24 22:03:47 crc kubenswrapper[4812]: I1124 22:03:47.602699 4812 scope.go:117] "RemoveContainer" containerID="37b99fa43b53a77e04c49d32367a14323150752d585218c26b8d0c4e4b459904" Nov 24 22:03:47 crc kubenswrapper[4812]: I1124 22:03:47.602725 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-xjvmf" Nov 24 22:03:47 crc kubenswrapper[4812]: I1124 22:03:47.997222 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-6vxnq"] Nov 24 22:03:47 crc kubenswrapper[4812]: E1124 22:03:47.998060 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" containerName="container-00" Nov 24 22:03:47 crc kubenswrapper[4812]: I1124 22:03:47.998090 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" containerName="container-00" Nov 24 22:03:47 crc kubenswrapper[4812]: I1124 22:03:47.998512 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="f813698a-f9c2-4b17-8e8b-eb5883b2b4dc" containerName="container-00" Nov 24 22:03:47 crc kubenswrapper[4812]: I1124 22:03:47.999456 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.119809 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2zf\" (UniqueName: \"kubernetes.io/projected/045eb846-a9c0-4e72-b31a-464b97b1d228-kube-api-access-rj2zf\") pod \"crc-debug-6vxnq\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.120672 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045eb846-a9c0-4e72-b31a-464b97b1d228-host\") pod \"crc-debug-6vxnq\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.223596 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2zf\" (UniqueName: \"kubernetes.io/projected/045eb846-a9c0-4e72-b31a-464b97b1d228-kube-api-access-rj2zf\") pod \"crc-debug-6vxnq\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.224146 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045eb846-a9c0-4e72-b31a-464b97b1d228-host\") pod \"crc-debug-6vxnq\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.224243 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045eb846-a9c0-4e72-b31a-464b97b1d228-host\") pod \"crc-debug-6vxnq\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.262770 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2zf\" (UniqueName: \"kubernetes.io/projected/045eb846-a9c0-4e72-b31a-464b97b1d228-kube-api-access-rj2zf\") pod \"crc-debug-6vxnq\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.327326 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:48 crc kubenswrapper[4812]: I1124 22:03:48.613557 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" event={"ID":"045eb846-a9c0-4e72-b31a-464b97b1d228","Type":"ContainerStarted","Data":"479d28a4e3903541d8630c450706f23eb302d8dd09096e1bc7db22fde7b72293"} Nov 24 22:03:49 crc kubenswrapper[4812]: I1124 22:03:49.629157 4812 generic.go:334] "Generic (PLEG): container finished" podID="045eb846-a9c0-4e72-b31a-464b97b1d228" containerID="a187e32e63c117d09fd0dcbe2b1590db8d75f040ffc1417dd677a9ec27049a03" exitCode=0 Nov 24 22:03:49 crc kubenswrapper[4812]: I1124 22:03:49.629246 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" event={"ID":"045eb846-a9c0-4e72-b31a-464b97b1d228","Type":"ContainerDied","Data":"a187e32e63c117d09fd0dcbe2b1590db8d75f040ffc1417dd677a9ec27049a03"} Nov 24 22:03:50 crc kubenswrapper[4812]: I1124 22:03:50.198777 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-6vxnq"] Nov 24 22:03:50 crc kubenswrapper[4812]: I1124 22:03:50.207913 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-6vxnq"] Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.450636 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-pqx2v"] Nov 24 22:03:51 crc kubenswrapper[4812]: E1124 22:03:51.451381 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="045eb846-a9c0-4e72-b31a-464b97b1d228" containerName="container-00" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.451404 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="045eb846-a9c0-4e72-b31a-464b97b1d228" containerName="container-00" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.451780 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="045eb846-a9c0-4e72-b31a-464b97b1d228" containerName="container-00" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.452891 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.517215 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-host\") pod \"crc-debug-pqx2v\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.517326 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-kube-api-access-2vb4p\") pod \"crc-debug-pqx2v\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.620235 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-host\") pod \"crc-debug-pqx2v\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.621035 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-kube-api-access-2vb4p\") pod \"crc-debug-pqx2v\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.620962 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-host\") pod \"crc-debug-pqx2v\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.656324 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-kube-api-access-2vb4p\") pod \"crc-debug-pqx2v\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.693629 4812 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="479d28a4e3903541d8630c450706f23eb302d8dd09096e1bc7db22fde7b72293" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.755684 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.792071 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.825969 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045eb846-a9c0-4e72-b31a-464b97b1d228-host\") pod \"045eb846-a9c0-4e72-b31a-464b97b1d228\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.826049 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2zf\" (UniqueName: \"kubernetes.io/projected/045eb846-a9c0-4e72-b31a-464b97b1d228-kube-api-access-rj2zf\") pod \"045eb846-a9c0-4e72-b31a-464b97b1d228\" (UID: \"045eb846-a9c0-4e72-b31a-464b97b1d228\") " Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.826066 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/045eb846-a9c0-4e72-b31a-464b97b1d228-host" (OuterVolumeSpecName: "host") pod "045eb846-a9c0-4e72-b31a-464b97b1d228" (UID: "045eb846-a9c0-4e72-b31a-464b97b1d228"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.827395 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/045eb846-a9c0-4e72-b31a-464b97b1d228-host\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.829761 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/045eb846-a9c0-4e72-b31a-464b97b1d228-kube-api-access-rj2zf" (OuterVolumeSpecName: "kube-api-access-rj2zf") pod "045eb846-a9c0-4e72-b31a-464b97b1d228" (UID: "045eb846-a9c0-4e72-b31a-464b97b1d228"). InnerVolumeSpecName "kube-api-access-rj2zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:03:51 crc kubenswrapper[4812]: W1124 22:03:51.832057 4812 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0c4fba1_f393_4eb9_8ac3_2e9f7d98e921.slice/crio-34d70c832b57f07cfab9a455f7ecb3c5b9b24172cb15f18307dac9d9a9240498 WatchSource:0}: Error finding container 34d70c832b57f07cfab9a455f7ecb3c5b9b24172cb15f18307dac9d9a9240498: Status 404 returned error can't find the container with id 34d70c832b57f07cfab9a455f7ecb3c5b9b24172cb15f18307dac9d9a9240498 Nov 24 22:03:51 crc kubenswrapper[4812]: I1124 22:03:51.930304 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2zf\" (UniqueName: \"kubernetes.io/projected/045eb846-a9c0-4e72-b31a-464b97b1d228-kube-api-access-rj2zf\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:52 crc kubenswrapper[4812]: I1124 22:03:52.710492 4812 generic.go:334] "Generic (PLEG): container finished" podID="d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921" containerID="9d3eefec97dcf4a0d6c37ad393d2743683d440042284d05bec6e84017c954e21" exitCode=0 Nov 24 22:03:52 crc kubenswrapper[4812]: I1124 22:03:52.710597 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" event={"ID":"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921","Type":"ContainerDied","Data":"9d3eefec97dcf4a0d6c37ad393d2743683d440042284d05bec6e84017c954e21"} Nov 24 22:03:52 crc kubenswrapper[4812]: I1124 22:03:52.711032 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-6vxnq" Nov 24 22:03:52 crc kubenswrapper[4812]: I1124 22:03:52.711050 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" event={"ID":"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921","Type":"ContainerStarted","Data":"34d70c832b57f07cfab9a455f7ecb3c5b9b24172cb15f18307dac9d9a9240498"} Nov 24 22:03:52 crc kubenswrapper[4812]: I1124 22:03:52.777261 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-pqx2v"] Nov 24 22:03:52 crc kubenswrapper[4812]: I1124 22:03:52.796065 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4zm6x/crc-debug-pqx2v"] Nov 24 22:03:52 crc kubenswrapper[4812]: I1124 22:03:52.983315 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="045eb846-a9c0-4e72-b31a-464b97b1d228" path="/var/lib/kubelet/pods/045eb846-a9c0-4e72-b31a-464b97b1d228/volumes" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.154856 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.288661 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-kube-api-access-2vb4p\") pod \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.288746 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-host\") pod \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\" (UID: \"d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921\") " Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.289511 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-host" (OuterVolumeSpecName: "host") pod "d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921" (UID: "d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.295536 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-kube-api-access-2vb4p" (OuterVolumeSpecName: "kube-api-access-2vb4p") pod "d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921" (UID: "d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921"). InnerVolumeSpecName "kube-api-access-2vb4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.391126 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vb4p\" (UniqueName: \"kubernetes.io/projected/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-kube-api-access-2vb4p\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.391168 4812 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921-host\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.753229 4812 scope.go:117] "RemoveContainer" containerID="9d3eefec97dcf4a0d6c37ad393d2743683d440042284d05bec6e84017c954e21" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.753297 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4zm6x/crc-debug-pqx2v" Nov 24 22:03:54 crc kubenswrapper[4812]: I1124 22:03:54.996945 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921" path="/var/lib/kubelet/pods/d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921/volumes" Nov 24 22:04:32 crc kubenswrapper[4812]: I1124 22:04:32.998792 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:04:33 crc kubenswrapper[4812]: I1124 22:04:32.999763 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:05:02 crc kubenswrapper[4812]: I1124 22:05:02.998290 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:05:03 crc kubenswrapper[4812]: I1124 22:05:03.000250 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:05:32 crc kubenswrapper[4812]: I1124 22:05:32.998329 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:05:33 crc kubenswrapper[4812]: I1124 22:05:32.999263 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:05:33 crc kubenswrapper[4812]: I1124 22:05:32.999377 4812 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" Nov 24 22:05:33 crc kubenswrapper[4812]: I1124 22:05:33.000642 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49054f13e47f83e6b6e31e0eccf51f5fd1d8d5b8e02975ff3d97240cc7947326"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:05:33 crc kubenswrapper[4812]: I1124 22:05:33.000734 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://49054f13e47f83e6b6e31e0eccf51f5fd1d8d5b8e02975ff3d97240cc7947326" gracePeriod=600 Nov 24 22:05:34 crc kubenswrapper[4812]: I1124 22:05:34.122549 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="49054f13e47f83e6b6e31e0eccf51f5fd1d8d5b8e02975ff3d97240cc7947326" exitCode=0 Nov 24 22:05:34 crc kubenswrapper[4812]: I1124 22:05:34.122648 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"49054f13e47f83e6b6e31e0eccf51f5fd1d8d5b8e02975ff3d97240cc7947326"} Nov 24 22:05:34 crc kubenswrapper[4812]: I1124 22:05:34.122842 4812 scope.go:117] "RemoveContainer" containerID="7ab91905cc8cf5ddf0287dbe477a58ed9a47b390211c994bd936304912b10e0f" Nov 24 22:05:35 crc kubenswrapper[4812]: I1124 22:05:35.136500 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e"} Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.290803 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_09f77557-419d-478a-a804-0b610554d370/init-config-reloader/0.log" Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.457916 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_09f77557-419d-478a-a804-0b610554d370/init-config-reloader/0.log" Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.471903 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_09f77557-419d-478a-a804-0b610554d370/alertmanager/0.log" Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.531607 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_09f77557-419d-478a-a804-0b610554d370/config-reloader/0.log" Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.675602 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5/aodh-api/0.log" Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.752343 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5/aodh-evaluator/0.log" Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.858850 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5/aodh-listener/0.log" 
Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.964517 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_1864c3b2-48cf-4ab8-9d68-9a2c88d44ec5/aodh-notifier/0.log"
Nov 24 22:06:47 crc kubenswrapper[4812]: I1124 22:06:47.976575 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c687cccc6-vsgqh_a2480741-4342-4f50-9cca-a5746948eb5d/barbican-api/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.064827 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-c687cccc6-vsgqh_a2480741-4342-4f50-9cca-a5746948eb5d/barbican-api-log/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.143167 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-858465b8fb-6s7lg_5b777cfd-5ec7-41fb-9119-96779293c4b3/barbican-keystone-listener/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.212973 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-858465b8fb-6s7lg_5b777cfd-5ec7-41fb-9119-96779293c4b3/barbican-keystone-listener-log/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.334230 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9fc8c454f-zfhjc_97eacd0a-6a86-431a-ad20-8c5b767e02de/barbican-worker/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.383429 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9fc8c454f-zfhjc_97eacd0a-6a86-431a-ad20-8c5b767e02de/barbican-worker-log/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.540628 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-mlwwr_d632362c-f1f0-4d67-a5aa-4521e500ae30/bootstrap-openstack-openstack-cell1/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.641418 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2e813f31-5ed3-4175-90ea-a120cea31966/ceilometer-central-agent/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.735977 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2e813f31-5ed3-4175-90ea-a120cea31966/ceilometer-notification-agent/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.744701 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2e813f31-5ed3-4175-90ea-a120cea31966/proxy-httpd/0.log"
Nov 24 22:06:48 crc kubenswrapper[4812]: I1124 22:06:48.790372 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2e813f31-5ed3-4175-90ea-a120cea31966/sg-core/0.log"
Nov 24 22:06:49 crc kubenswrapper[4812]: I1124 22:06:49.646715 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5754dce5-a565-4102-9655-a8c736c9aa70/cinder-api-log/0.log"
Nov 24 22:06:49 crc kubenswrapper[4812]: I1124 22:06:49.654778 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5754dce5-a565-4102-9655-a8c736c9aa70/cinder-api/0.log"
Nov 24 22:06:49 crc kubenswrapper[4812]: I1124 22:06:49.845007 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_115cede8-0b10-43ab-bf8b-b7ce50ad787e/cinder-scheduler/0.log"
Nov 24 22:06:49 crc kubenswrapper[4812]: I1124 22:06:49.921980 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-4bh9q_37665f2f-cfb5-42f7-b07b-5accbe05e5ec/configure-network-openstack-openstack-cell1/0.log"
Nov 24 22:06:49 crc kubenswrapper[4812]: I1124 22:06:49.969921 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_115cede8-0b10-43ab-bf8b-b7ce50ad787e/probe/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.130875 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-85zlh_58b1f59e-7923-41f2-a94c-ffec3693018c/configure-os-openstack-openstack-cell1/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.219363 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-854f6c55f5-qpp9f_757a62a8-91d7-48fd-86c8-ec131f003bc4/init/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.388156 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-854f6c55f5-qpp9f_757a62a8-91d7-48fd-86c8-ec131f003bc4/dnsmasq-dns/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.407102 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-854f6c55f5-qpp9f_757a62a8-91d7-48fd-86c8-ec131f003bc4/init/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.489708 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-kntww_6cf727d3-1ff3-4538-b397-4b5075237e17/download-cache-openstack-openstack-cell1/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.618113 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_bbdef179-c841-482d-a2dd-9486caaf1339/glance-httpd/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.635506 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_bbdef179-c841-482d-a2dd-9486caaf1339/glance-log/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.733949 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_725bf128-3f54-4775-8fbc-f12987560eb7/glance-httpd/0.log"
Nov 24 22:06:50 crc kubenswrapper[4812]: I1124 22:06:50.804787 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_725bf128-3f54-4775-8fbc-f12987560eb7/glance-log/0.log"
Nov 24 22:06:51 crc kubenswrapper[4812]: I1124 22:06:51.454988 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-69f4dd449f-ktzpp_d9ae0eab-8a74-4569-993b-ac9cfca1fe08/heat-api/0.log"
Nov 24 22:06:51 crc kubenswrapper[4812]: I1124 22:06:51.599525 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-65d8fb47c7-4swww_d7ec3a8b-b145-4ae6-9ec5-36d0e8778782/heat-cfnapi/0.log"
Nov 24 22:06:51 crc kubenswrapper[4812]: I1124 22:06:51.903583 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-746cdf7854-2hgwq_5840494b-6708-4ad4-b279-183e4d30dc5d/horizon/0.log"
Nov 24 22:06:51 crc kubenswrapper[4812]: I1124 22:06:51.926709 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6d5d7f948-4gnkf_be078702-3711-4540-834f-9627bfc1da1c/heat-engine/0.log"
Nov 24 22:06:52 crc kubenswrapper[4812]: I1124 22:06:52.083257 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-9bgq8_189a60ec-9ff7-4e4c-bd6b-b24fde13c9a6/install-certs-openstack-openstack-cell1/0.log"
Nov 24 22:06:52 crc kubenswrapper[4812]: I1124 22:06:52.169825 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-h56pj_5a2711f5-c999-47eb-9a01-04e4691e5983/install-os-openstack-openstack-cell1/0.log"
Nov 24 22:06:52 crc kubenswrapper[4812]: I1124 22:06:52.366547 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-746cdf7854-2hgwq_5840494b-6708-4ad4-b279-183e4d30dc5d/horizon-log/0.log"
Nov 24 22:06:52 crc kubenswrapper[4812]: I1124 22:06:52.503154 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400301-8fvbw_d891f15c-5fab-4e30-b131-437a574c5c6e/keystone-cron/0.log"
Nov 24 22:06:52 crc kubenswrapper[4812]: I1124 22:06:52.513148 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65845ffd66-mhr2r_cf884fd3-5b39-4003-9246-e340b17bc43f/keystone-api/0.log"
Nov 24 22:06:52 crc kubenswrapper[4812]: I1124 22:06:52.584795 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400361-2gkwr_a4e84932-5719-400d-8c94-df947dd51038/keystone-cron/0.log"
Nov 24 22:06:52 crc kubenswrapper[4812]: I1124 22:06:52.916234 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_bdff58f4-afb9-42ce-8bce-89173d306577/kube-state-metrics/0.log"
Nov 24 22:06:53 crc kubenswrapper[4812]: I1124 22:06:53.021048 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-j29dm_b7c7c0e8-3eb9-41ef-af51-bb4e6e1ef028/libvirt-openstack-openstack-cell1/0.log"
Nov 24 22:06:53 crc kubenswrapper[4812]: I1124 22:06:53.307752 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f5b87f55c-47mq2_e0f72ed7-1afc-4291-ac01-8832add1eac3/neutron-api/0.log"
Nov 24 22:06:53 crc kubenswrapper[4812]: I1124 22:06:53.367403 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6f5b87f55c-47mq2_e0f72ed7-1afc-4291-ac01-8832add1eac3/neutron-httpd/0.log"
Nov 24 22:06:53 crc kubenswrapper[4812]: I1124 22:06:53.607438 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-lpvlm_b613d8c2-5066-4eb3-bad2-4e662e2b5078/neutron-dhcp-openstack-openstack-cell1/0.log"
Nov 24 22:06:53 crc kubenswrapper[4812]: I1124 22:06:53.684780 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-8jdjj_797f1de2-68d2-44cb-88d8-37f0c13ffed8/neutron-metadata-openstack-openstack-cell1/0.log"
Nov 24 22:06:53 crc kubenswrapper[4812]: I1124 22:06:53.857362 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-kdd46_6a59db0a-69f8-4843-9570-3dad3cb5d916/neutron-sriov-openstack-openstack-cell1/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.054894 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e7d15f6e-88a0-44d8-a0db-164710a033d1/nova-api-log/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.099226 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_e7d15f6e-88a0-44d8-a0db-164710a033d1/nova-api-api/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.228819 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_67d0e93a-8395-4a70-8745-a35492914079/nova-cell0-conductor-conductor/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.403804 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5889704f-5b8d-4c34-a1ff-7295054a8cdf/nova-cell1-conductor-conductor/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.536770 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8c7bd34c-5266-40cb-8023-b6013d9cc8a2/nova-cell1-novncproxy-novncproxy/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.706073 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellscn4z_2f62942a-1274-4b4d-8f6a-82488ebd090b/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.882236 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-rjxhn_f92d5dc8-bc6e-4ff9-9c24-3bdb385a7199/nova-cell1-openstack-openstack-cell1/0.log"
Nov 24 22:06:54 crc kubenswrapper[4812]: I1124 22:06:54.978932 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_81ede438-1eb5-43b3-86ae-6d4629ce5acb/nova-metadata-log/0.log"
Nov 24 22:06:55 crc kubenswrapper[4812]: I1124 22:06:55.281423 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e2d1fbe4-c950-458c-aed0-03e810894146/nova-scheduler-scheduler/0.log"
Nov 24 22:06:55 crc kubenswrapper[4812]: I1124 22:06:55.320794 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-56d499f456-hts46_3527688d-f0b4-4a46-aadd-92bc13dc3f0e/init/0.log"
Nov 24 22:06:55 crc kubenswrapper[4812]: I1124 22:06:55.417584 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_81ede438-1eb5-43b3-86ae-6d4629ce5acb/nova-metadata-metadata/0.log"
Nov 24 22:06:55 crc kubenswrapper[4812]: I1124 22:06:55.604925 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-56d499f456-hts46_3527688d-f0b4-4a46-aadd-92bc13dc3f0e/init/0.log"
Nov 24 22:06:55 crc kubenswrapper[4812]: I1124 22:06:55.615280 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-56d499f456-hts46_3527688d-f0b4-4a46-aadd-92bc13dc3f0e/octavia-api-provider-agent/0.log"
Nov 24 22:06:55 crc kubenswrapper[4812]: I1124 22:06:55.826379 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-9z7d6_1bb542bb-fa97-46fa-9c1b-7809b8ae59e2/init/0.log"
Nov 24 22:06:55 crc kubenswrapper[4812]: I1124 22:06:55.957514 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-api-56d499f456-hts46_3527688d-f0b4-4a46-aadd-92bc13dc3f0e/octavia-api/0.log"
Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.012813 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-9z7d6_1bb542bb-fa97-46fa-9c1b-7809b8ae59e2/init/0.log"
Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.128878 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-healthmanager-9z7d6_1bb542bb-fa97-46fa-9c1b-7809b8ae59e2/octavia-healthmanager/0.log"
Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.138487 4812 log.go:25] "Finished parsing log file"
path="/var/log/pods/openstack_octavia-housekeeping-b7ndg_735f042b-f017-4e06-8699-518a9853d124/init/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.387734 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-b7ndg_735f042b-f017-4e06-8699-518a9853d124/init/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.402861 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-5955f5554b-657vt_1228f7a5-f547-4e4c-a49d-597bcbe2860c/init/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.413920 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-housekeeping-b7ndg_735f042b-f017-4e06-8699-518a9853d124/octavia-housekeeping/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.620311 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-5955f5554b-657vt_1228f7a5-f547-4e4c-a49d-597bcbe2860c/init/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.627995 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-image-upload-5955f5554b-657vt_1228f7a5-f547-4e4c-a49d-597bcbe2860c/octavia-amphora-httpd/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.690140 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-5mflp_1d5ceee2-ae1a-468e-9c5f-ed0160b4db18/init/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.870733 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-5mflp_1d5ceee2-ae1a-468e-9c5f-ed0160b4db18/octavia-rsyslog/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.901864 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-rsyslog-5mflp_1d5ceee2-ae1a-468e-9c5f-ed0160b4db18/init/0.log" Nov 24 22:06:56 crc kubenswrapper[4812]: I1124 22:06:56.975775 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-6cwpp_41e05256-a3b2-4fed-8f9a-970cdbf2d392/init/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.128584 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-6cwpp_41e05256-a3b2-4fed-8f9a-970cdbf2d392/init/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.251174 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8f5f9fd3-3b4a-4670-9daf-06e0527067ab/mysql-bootstrap/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.274531 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_octavia-worker-6cwpp_41e05256-a3b2-4fed-8f9a-970cdbf2d392/octavia-worker/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.383511 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8f5f9fd3-3b4a-4670-9daf-06e0527067ab/mysql-bootstrap/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.456584 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8f5f9fd3-3b4a-4670-9daf-06e0527067ab/galera/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.517310 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dfb18991-c497-4318-abcf-de576607d11c/mysql-bootstrap/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.722969 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_dfb18991-c497-4318-abcf-de576607d11c/galera/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.728349 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dfb18991-c497-4318-abcf-de576607d11c/mysql-bootstrap/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.757473 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_db618fd5-6493-454f-aaa3-6800c4a15546/openstackclient/0.log" Nov 24 22:06:57 crc kubenswrapper[4812]: I1124 22:06:57.893883 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-qxt9h_0adcb27e-b811-40b7-a37a-4179f629c4f9/openstack-network-exporter/0.log" Nov 24 22:06:58 crc kubenswrapper[4812]: I1124 22:06:58.135783 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7n6sd_b7be236c-9a14-4aa5-97ed-df06b89d34b7/ovsdb-server-init/0.log" Nov 24 22:06:58 crc kubenswrapper[4812]: I1124 22:06:58.305237 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7n6sd_b7be236c-9a14-4aa5-97ed-df06b89d34b7/ovs-vswitchd/0.log" Nov 24 22:06:58 crc kubenswrapper[4812]: I1124 22:06:58.321278 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7n6sd_b7be236c-9a14-4aa5-97ed-df06b89d34b7/ovsdb-server/0.log" Nov 24 22:06:58 crc kubenswrapper[4812]: I1124 22:06:58.335077 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7n6sd_b7be236c-9a14-4aa5-97ed-df06b89d34b7/ovsdb-server-init/0.log" Nov 24 22:06:58 crc kubenswrapper[4812]: I1124 22:06:58.504552 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xrmlq_00bfc83b-f048-4338-bd95-93abfee34089/ovn-controller/0.log" Nov 24 22:06:58 crc kubenswrapper[4812]: I1124 22:06:58.536386 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53/openstack-network-exporter/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.232228 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7d2ac6f4-f8d8-4a31-a4db-bf95c1b2da53/ovn-northd/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.318819 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-qxwxw_a7c8f911-0614-4e94-959f-8a0eabb6f1db/ovn-openstack-openstack-cell1/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.404185 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e559397-8cae-43cf-a277-20d09d01f986/openstack-network-exporter/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.503244 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2e559397-8cae-43cf-a277-20d09d01f986/ovsdbserver-nb/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.567296 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b/openstack-network-exporter/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.693797 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_be72f6f6-bdc0-4b52-8eaa-f07fdf94c66b/ovsdbserver-nb/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.758501 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-2_ef13b518-4ed5-4eed-8777-14c768a0e2ce/ovsdbserver-nb/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.776184 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_ef13b518-4ed5-4eed-8777-14c768a0e2ce/openstack-network-exporter/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.939612 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_22e355a3-e7d9-43a2-8b76-f0d8808a0f85/openstack-network-exporter/0.log" Nov 24 22:06:59 crc kubenswrapper[4812]: I1124 22:06:59.964860 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_22e355a3-e7d9-43a2-8b76-f0d8808a0f85/ovsdbserver-sb/0.log" Nov 24 22:07:00 crc kubenswrapper[4812]: I1124 22:07:00.140817 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_05fc892d-542f-483d-a083-ba3dac6f222b/openstack-network-exporter/0.log" Nov 24 22:07:00 crc kubenswrapper[4812]: I1124 22:07:00.206247 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_05fc892d-542f-483d-a083-ba3dac6f222b/ovsdbserver-sb/0.log" Nov 24 22:07:00 crc kubenswrapper[4812]: I1124 22:07:00.253956 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_58438938-925a-4426-9405-e5d3a56db751/openstack-network-exporter/0.log" Nov 24 22:07:00 crc kubenswrapper[4812]: I1124 22:07:00.941071 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_58438938-925a-4426-9405-e5d3a56db751/ovsdbserver-sb/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.053200 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55749cf576-r94b9_81353499-bb5a-4a56-85b1-1e009e60e610/placement-api/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.094635 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55749cf576-r94b9_81353499-bb5a-4a56-85b1-1e009e60e610/placement-log/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.196541 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-c7mtxz_99fed2e9-34ea-4ee4-b596-134d9190482f/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.338523 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b3547ac3-1f61-41e5-9674-317a8280dbfc/init-config-reloader/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.494674 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b3547ac3-1f61-41e5-9674-317a8280dbfc/init-config-reloader/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.512314 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b3547ac3-1f61-41e5-9674-317a8280dbfc/config-reloader/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.572610 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b3547ac3-1f61-41e5-9674-317a8280dbfc/thanos-sidecar/0.log" Nov 24 22:07:01 crc kubenswrapper[4812]: I1124 22:07:01.621574 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_b3547ac3-1f61-41e5-9674-317a8280dbfc/prometheus/0.log" Nov 24 22:07:02 crc 
kubenswrapper[4812]: I1124 22:07:02.055808 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2d432b03-6333-4650-b763-433dd01c0977/setup-container/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.205211 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2d432b03-6333-4650-b763-433dd01c0977/setup-container/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.216839 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_2d432b03-6333-4650-b763-433dd01c0977/rabbitmq/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.281651 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67f318f4-965e-4123-8fd2-21d1f495d110/setup-container/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.571969 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67f318f4-965e-4123-8fd2-21d1f495d110/setup-container/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.579209 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-v8879_e3bff763-259a-4fae-b22e-c48e71a1c9a3/reboot-os-openstack-openstack-cell1/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.649359 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_67f318f4-965e-4123-8fd2-21d1f495d110/rabbitmq/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.829612 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-mvcx7_73077426-5c9e-4705-8840-6d65e962cf45/run-os-openstack-openstack-cell1/0.log" Nov 24 22:07:02 crc kubenswrapper[4812]: I1124 22:07:02.899199 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-n5jp4_e3443cfc-e84b-4210-a03e-7596c3e620bf/ssh-known-hosts-openstack/0.log" Nov 24 22:07:03 crc kubenswrapper[4812]: I1124 22:07:03.111674 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f55484fc8-26grz_ca44fa40-7e18-4f18-b5a9-8714994880b8/proxy-server/0.log" Nov 24 22:07:03 crc kubenswrapper[4812]: I1124 22:07:03.246177 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-f55484fc8-26grz_ca44fa40-7e18-4f18-b5a9-8714994880b8/proxy-httpd/0.log" Nov 24 22:07:03 crc kubenswrapper[4812]: I1124 22:07:03.300997 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wfb4f_fb34afdf-0308-4f83-9ebe-1d25aef208cb/swift-ring-rebalance/0.log" Nov 24 22:07:03 crc kubenswrapper[4812]: I1124 22:07:03.468501 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-hwsjw_6ae142d4-b3e1-42c0-9d0e-184d42fcc9fe/telemetry-openstack-openstack-cell1/0.log" Nov 24 22:07:03 crc kubenswrapper[4812]: I1124 22:07:03.517750 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-g8kll_92040984-aa11-4d6a-9069-58324eac2e33/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 24 22:07:03 crc kubenswrapper[4812]: I1124 22:07:03.674113 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-c22ms_9cedb7d1-e8f4-4652-9d14-bec67cb873eb/validate-network-openstack-openstack-cell1/0.log" Nov 24 22:07:06 crc kubenswrapper[4812]: 
I1124 22:07:06.348385 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zrb44"] Nov 24 22:07:06 crc kubenswrapper[4812]: E1124 22:07:06.349328 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921" containerName="container-00" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.349356 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921" containerName="container-00" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.349595 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0c4fba1-f393-4eb9-8ac3-2e9f7d98e921" containerName="container-00" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.351215 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.366603 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrb44"] Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.456503 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-catalog-content\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.456570 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-utilities\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.456672 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmvp8\" (UniqueName: \"kubernetes.io/projected/82481841-153a-429e-83a7-807e4b9ccf86-kube-api-access-fmvp8\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.558098 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-catalog-content\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.558163 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-utilities\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.558262 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmvp8\" (UniqueName: \"kubernetes.io/projected/82481841-153a-429e-83a7-807e4b9ccf86-kube-api-access-fmvp8\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc 
kubenswrapper[4812]: I1124 22:07:06.559015 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-catalog-content\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.559230 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-utilities\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.612481 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmvp8\" (UniqueName: \"kubernetes.io/projected/82481841-153a-429e-83a7-807e4b9ccf86-kube-api-access-fmvp8\") pod \"community-operators-zrb44\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:06 crc kubenswrapper[4812]: I1124 22:07:06.696226 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:07 crc kubenswrapper[4812]: I1124 22:07:07.301844 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrb44"] Nov 24 22:07:07 crc kubenswrapper[4812]: I1124 22:07:07.356260 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrb44" event={"ID":"82481841-153a-429e-83a7-807e4b9ccf86","Type":"ContainerStarted","Data":"5eb81694104096a0cc7fda5ca60ba20d0338e8aee95120143efcc19ddd6247d8"} Nov 24 22:07:08 crc kubenswrapper[4812]: I1124 22:07:08.200252 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_6007348b-cefa-4899-bdf4-bc09671bdf1e/memcached/0.log" Nov 24 22:07:08 crc kubenswrapper[4812]: I1124 22:07:08.369016 4812 generic.go:334] "Generic (PLEG): container finished" podID="82481841-153a-429e-83a7-807e4b9ccf86" containerID="36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c" exitCode=0 Nov 24 22:07:08 crc kubenswrapper[4812]: I1124 22:07:08.370208 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrb44" event={"ID":"82481841-153a-429e-83a7-807e4b9ccf86","Type":"ContainerDied","Data":"36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c"} Nov 24 22:07:10 crc kubenswrapper[4812]: I1124 22:07:10.390723 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrb44" event={"ID":"82481841-153a-429e-83a7-807e4b9ccf86","Type":"ContainerStarted","Data":"979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0"} Nov 24 22:07:11 crc kubenswrapper[4812]: I1124 22:07:11.401402 4812 generic.go:334] "Generic (PLEG): container finished" podID="82481841-153a-429e-83a7-807e4b9ccf86" containerID="979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0" exitCode=0 Nov 24 22:07:11 crc kubenswrapper[4812]: I1124 22:07:11.401455 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrb44" event={"ID":"82481841-153a-429e-83a7-807e4b9ccf86","Type":"ContainerDied","Data":"979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0"} Nov 24 22:07:12 crc 
kubenswrapper[4812]: I1124 22:07:12.417463 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrb44" event={"ID":"82481841-153a-429e-83a7-807e4b9ccf86","Type":"ContainerStarted","Data":"8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63"} Nov 24 22:07:12 crc kubenswrapper[4812]: I1124 22:07:12.444588 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zrb44" podStartSLOduration=2.9864093560000002 podStartE2EDuration="6.444570785s" podCreationTimestamp="2025-11-24 22:07:06 +0000 UTC" firstStartedPulling="2025-11-24 22:07:08.371024516 +0000 UTC m=+10222.159976887" lastFinishedPulling="2025-11-24 22:07:11.829185945 +0000 UTC m=+10225.618138316" observedRunningTime="2025-11-24 22:07:12.442214328 +0000 UTC m=+10226.231166699" watchObservedRunningTime="2025-11-24 22:07:12.444570785 +0000 UTC m=+10226.233523146" Nov 24 22:07:16 crc kubenswrapper[4812]: I1124 22:07:16.697722 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:16 crc kubenswrapper[4812]: I1124 22:07:16.698865 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:16 crc kubenswrapper[4812]: I1124 22:07:16.760384 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:17 crc kubenswrapper[4812]: I1124 22:07:17.560517 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:17 crc kubenswrapper[4812]: I1124 22:07:17.610956 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrb44"] Nov 24 22:07:19 crc kubenswrapper[4812]: I1124 22:07:19.510654 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zrb44" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="registry-server" containerID="cri-o://8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63" gracePeriod=2 Nov 24 22:07:19 crc kubenswrapper[4812]: I1124 22:07:19.985420 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.165836 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-utilities\") pod \"82481841-153a-429e-83a7-807e4b9ccf86\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.165942 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-catalog-content\") pod \"82481841-153a-429e-83a7-807e4b9ccf86\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.166245 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmvp8\" (UniqueName: \"kubernetes.io/projected/82481841-153a-429e-83a7-807e4b9ccf86-kube-api-access-fmvp8\") pod \"82481841-153a-429e-83a7-807e4b9ccf86\" (UID: \"82481841-153a-429e-83a7-807e4b9ccf86\") " Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.166536 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-utilities" (OuterVolumeSpecName: "utilities") pod "82481841-153a-429e-83a7-807e4b9ccf86" (UID: "82481841-153a-429e-83a7-807e4b9ccf86"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.167127 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.172121 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82481841-153a-429e-83a7-807e4b9ccf86-kube-api-access-fmvp8" (OuterVolumeSpecName: "kube-api-access-fmvp8") pod "82481841-153a-429e-83a7-807e4b9ccf86" (UID: "82481841-153a-429e-83a7-807e4b9ccf86"). InnerVolumeSpecName "kube-api-access-fmvp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.214647 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82481841-153a-429e-83a7-807e4b9ccf86" (UID: "82481841-153a-429e-83a7-807e4b9ccf86"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.269563 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82481841-153a-429e-83a7-807e4b9ccf86-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.269610 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmvp8\" (UniqueName: \"kubernetes.io/projected/82481841-153a-429e-83a7-807e4b9ccf86-kube-api-access-fmvp8\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.532862 4812 generic.go:334] "Generic (PLEG): container finished" podID="82481841-153a-429e-83a7-807e4b9ccf86" containerID="8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63" exitCode=0 Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.532905 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrb44" event={"ID":"82481841-153a-429e-83a7-807e4b9ccf86","Type":"ContainerDied","Data":"8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63"} Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.532932 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrb44" event={"ID":"82481841-153a-429e-83a7-807e4b9ccf86","Type":"ContainerDied","Data":"5eb81694104096a0cc7fda5ca60ba20d0338e8aee95120143efcc19ddd6247d8"} Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.532950 4812 scope.go:117] "RemoveContainer" containerID="8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.533068 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrb44" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.570782 4812 scope.go:117] "RemoveContainer" containerID="979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.577710 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrb44"] Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.587281 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zrb44"] Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.619269 4812 scope.go:117] "RemoveContainer" containerID="36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.711642 4812 scope.go:117] "RemoveContainer" containerID="8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63" Nov 24 22:07:20 crc kubenswrapper[4812]: E1124 22:07:20.712126 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63\": container with ID starting with 8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63 not found: ID does not exist" containerID="8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.712230 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63"} err="failed to get container status \"8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63\": rpc error: code = NotFound desc = could not find container \"8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63\": container with ID starting with 8889045b20390f04f05d118eef29c5c937f6125f90644b4ff899c4653044cc63 not found: ID does not exist" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.712307 4812 scope.go:117] "RemoveContainer" containerID="979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0" Nov 24 22:07:20 crc kubenswrapper[4812]: E1124 22:07:20.716611 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0\": container with ID starting with 979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0 not found: ID does not exist" containerID="979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.716661 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0"} err="failed to get container status \"979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0\": rpc error: code = NotFound desc = could not find container \"979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0\": container with ID starting with 979c758af26090f2f5b341eb62422c6eeb2ed3c658089541750d05df15952ed0 not found: ID does not exist" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.716689 4812 scope.go:117] "RemoveContainer" containerID="36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c" Nov 24 22:07:20 crc kubenswrapper[4812]: E1124 22:07:20.723249 4812 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c\": container with ID starting with 36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c not found: ID does not exist" containerID="36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.723286 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c"} err="failed to get container status \"36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c\": rpc error: code = NotFound desc = could not find container \"36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c\": container with ID starting with 36fb3170f9e35a4eb2988668140d83d5a97f26401ddce03de85162e0c4815f0c not found: ID does not exist" Nov 24 22:07:20 crc kubenswrapper[4812]: I1124 22:07:20.977664 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82481841-153a-429e-83a7-807e4b9ccf86" path="/var/lib/kubelet/pods/82481841-153a-429e-83a7-807e4b9ccf86/volumes" Nov 24 22:07:36 crc kubenswrapper[4812]: I1124 22:07:36.254575 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-wqgxl_baf03f1e-597c-434a-b90d-821e44d586d1/kube-rbac-proxy/0.log" Nov 24 22:07:36 crc kubenswrapper[4812]: I1124 22:07:36.283437 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-wqgxl_baf03f1e-597c-434a-b90d-821e44d586d1/manager/0.log" Nov 24 22:07:36 crc kubenswrapper[4812]: I1124 22:07:36.456812 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw_66e69f66-9eb1-4276-b756-572b50aa2417/util/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.330496 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw_66e69f66-9eb1-4276-b756-572b50aa2417/pull/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.375423 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw_66e69f66-9eb1-4276-b756-572b50aa2417/pull/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.393514 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw_66e69f66-9eb1-4276-b756-572b50aa2417/util/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.547958 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw_66e69f66-9eb1-4276-b756-572b50aa2417/pull/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.574112 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw_66e69f66-9eb1-4276-b756-572b50aa2417/extract/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.584097 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_bbe0292a041351b2e91c74017e768208b36f144dd799fdf82c414fd15f5fhzw_66e69f66-9eb1-4276-b756-572b50aa2417/util/0.log" Nov 24 22:07:37 crc 
kubenswrapper[4812]: I1124 22:07:37.766101 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-98txq_e41fac25-1110-4e6e-a45e-5529ef0ef2f1/kube-rbac-proxy/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.788086 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-98txq_e41fac25-1110-4e6e-a45e-5529ef0ef2f1/manager/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.814195 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-48cqh_851fd2ec-631c-41d1-826f-934b3561cd70/kube-rbac-proxy/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.980523 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-5j5g2_82485996-0922-41c0-903c-e44eadf8be30/kube-rbac-proxy/0.log" Nov 24 22:07:37 crc kubenswrapper[4812]: I1124 22:07:37.988223 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-48cqh_851fd2ec-631c-41d1-826f-934b3561cd70/manager/0.log" Nov 24 22:07:38 crc kubenswrapper[4812]: I1124 22:07:38.134027 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-5j5g2_82485996-0922-41c0-903c-e44eadf8be30/manager/0.log" Nov 24 22:07:38 crc kubenswrapper[4812]: I1124 22:07:38.167411 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-mz2ht_2f7164ed-0304-404a-938f-134952b55d15/kube-rbac-proxy/0.log" Nov 24 22:07:38 crc kubenswrapper[4812]: I1124 22:07:38.240013 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-mz2ht_2f7164ed-0304-404a-938f-134952b55d15/manager/0.log" Nov 24 22:07:38 crc kubenswrapper[4812]: I1124 22:07:38.321928 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-6h6pl_c317b8b9-c5b6-4aa3-b666-425be5ee68fc/kube-rbac-proxy/0.log" Nov 24 22:07:38 crc kubenswrapper[4812]: I1124 22:07:38.460906 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-6h6pl_c317b8b9-c5b6-4aa3-b666-425be5ee68fc/manager/0.log" Nov 24 22:07:38 crc kubenswrapper[4812]: I1124 22:07:38.517735 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-fprj2_4a7a540a-2cc4-4703-bfa6-b85acdffe4a7/kube-rbac-proxy/0.log" Nov 24 22:07:38 crc kubenswrapper[4812]: I1124 22:07:38.884023 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-fprj2_4a7a540a-2cc4-4703-bfa6-b85acdffe4a7/manager/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.257378 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-zxcm8_393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb/kube-rbac-proxy/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.275883 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-zxcm8_393e1e76-aacc-4ae3-95e3-cb2f4cddd6bb/manager/0.log" Nov 
24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.418591 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-b5ghq_3aafb524-8fa5-4752-b3f5-a4c700c8b4ee/kube-rbac-proxy/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.529697 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-5fw6l_b745dd21-3908-4dba-8966-a4be4aea8aa4/kube-rbac-proxy/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.562397 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-b5ghq_3aafb524-8fa5-4752-b3f5-a4c700c8b4ee/manager/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.675576 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-5fw6l_b745dd21-3908-4dba-8966-a4be4aea8aa4/manager/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.718876 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-s9pm6_aa881065-4590-4fbb-9f31-926164d07125/kube-rbac-proxy/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.753739 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-s9pm6_aa881065-4590-4fbb-9f31-926164d07125/manager/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.861595 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-7thtk_39c92508-c338-4865-b7fb-aa0c026e5f6b/kube-rbac-proxy/0.log" Nov 24 22:07:39 crc kubenswrapper[4812]: I1124 22:07:39.962591 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-jpljb_599321b1-4c22-4f86-a8fb-16ab7335db6a/kube-rbac-proxy/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.014069 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-7thtk_39c92508-c338-4865-b7fb-aa0c026e5f6b/manager/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.127274 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-jpljb_599321b1-4c22-4f86-a8fb-16ab7335db6a/manager/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.192150 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-cgt44_8e0c969b-35df-4a7c-85ff-ae79ea881c06/kube-rbac-proxy/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.240123 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-cgt44_8e0c969b-35df-4a7c-85ff-ae79ea881c06/manager/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.323073 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-8bjbz_34cfb52c-d093-4d71-bba3-0ab2e2047e74/manager/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.328624 4812 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b58f89467-8bjbz_34cfb52c-d093-4d71-bba3-0ab2e2047e74/kube-rbac-proxy/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.637161 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7b567956b5-x6rdv_6a6cc9d7-e241-4ff6-bdc6-fa1ab7862f9f/operator/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.725627 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-jg58x_76a00195-5eec-4790-93cf-0167e77c2d69/kube-rbac-proxy/0.log" Nov 24 22:07:40 crc kubenswrapper[4812]: I1124 22:07:40.774312 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-d74vw_b3fcebf7-47e1-4467-92c7-391f0d9bcc5b/registry-server/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.024196 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-jg58x_76a00195-5eec-4790-93cf-0167e77c2d69/manager/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.038420 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-ptcwx_31db8161-54b4-417d-b6ec-85109904df50/kube-rbac-proxy/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.055975 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-ptcwx_31db8161-54b4-417d-b6ec-85109904df50/manager/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.219389 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-mrr28_a601c469-c7c1-4311-8927-a0ebccc2722b/operator/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.277933 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-68nf9_2ec7107c-8ae0-4a00-901e-e70ac99520e7/kube-rbac-proxy/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.379957 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-68nf9_2ec7107c-8ae0-4a00-901e-e70ac99520e7/manager/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.473411 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-x8j6n_2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8/kube-rbac-proxy/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.653067 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-28xt4_05059f83-e196-4181-916f-ba36ed828d22/manager/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.707229 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-28xt4_05059f83-e196-4181-916f-ba36ed828d22/kube-rbac-proxy/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.828793 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-567f98c9d-x8j6n_2ac87fe9-d5cf-44a3-b5fe-0717ea4087c8/manager/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.878102 4812 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-h7bnm_8b38e013-64de-4e6d-8092-b9607b1c21f7/kube-rbac-proxy/0.log" Nov 24 22:07:41 crc kubenswrapper[4812]: I1124 22:07:41.945252 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-h7bnm_8b38e013-64de-4e6d-8092-b9607b1c21f7/manager/0.log" Nov 24 22:07:42 crc kubenswrapper[4812]: I1124 22:07:42.880211 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7cd5954d9-ntd69_eff10de0-2386-4e12-97a9-b6f08b1eda95/manager/0.log" Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.163760 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bwd"] Nov 24 22:07:52 crc kubenswrapper[4812]: E1124 22:07:52.164746 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="extract-utilities" Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.164761 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="extract-utilities" Nov 24 22:07:52 crc kubenswrapper[4812]: E1124 22:07:52.164784 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="extract-content" Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.164793 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="extract-content" Nov 24 22:07:52 crc kubenswrapper[4812]: E1124 22:07:52.164828 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="registry-server" Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.164835 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="registry-server" Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.165038 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="82481841-153a-429e-83a7-807e4b9ccf86" containerName="registry-server" Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.166610 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.176638 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bwd"]
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.272767 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-utilities\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.272952 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdlcj\" (UniqueName: \"kubernetes.io/projected/02b74576-7600-4bad-9a9a-8f38273e9328-kube-api-access-bdlcj\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.273130 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-catalog-content\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.374752 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-utilities\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.374843 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdlcj\" (UniqueName: \"kubernetes.io/projected/02b74576-7600-4bad-9a9a-8f38273e9328-kube-api-access-bdlcj\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.374944 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-catalog-content\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.375602 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-catalog-content\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.375897 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-utilities\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.403394 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdlcj\" (UniqueName: \"kubernetes.io/projected/02b74576-7600-4bad-9a9a-8f38273e9328-kube-api-access-bdlcj\") pod \"redhat-marketplace-r2bwd\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") " pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:52 crc kubenswrapper[4812]: I1124 22:07:52.495302 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:07:53 crc kubenswrapper[4812]: I1124 22:07:53.033467 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bwd"]
Nov 24 22:07:53 crc kubenswrapper[4812]: I1124 22:07:53.919873 4812 generic.go:334] "Generic (PLEG): container finished" podID="02b74576-7600-4bad-9a9a-8f38273e9328" containerID="901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744" exitCode=0
Nov 24 22:07:53 crc kubenswrapper[4812]: I1124 22:07:53.920180 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bwd" event={"ID":"02b74576-7600-4bad-9a9a-8f38273e9328","Type":"ContainerDied","Data":"901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744"}
Nov 24 22:07:53 crc kubenswrapper[4812]: I1124 22:07:53.920209 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bwd" event={"ID":"02b74576-7600-4bad-9a9a-8f38273e9328","Type":"ContainerStarted","Data":"2b7f018be9dcbc7865e92beb418896078cf00a3cd98ef8e4472887bfde6268bb"}
Nov 24 22:07:53 crc kubenswrapper[4812]: I1124 22:07:53.922092 4812 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 24 22:07:54 crc kubenswrapper[4812]: I1124 22:07:54.929434 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bwd" event={"ID":"02b74576-7600-4bad-9a9a-8f38273e9328","Type":"ContainerStarted","Data":"40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139"}
Nov 24 22:07:55 crc kubenswrapper[4812]: I1124 22:07:55.942413 4812 generic.go:334] "Generic (PLEG): container finished" podID="02b74576-7600-4bad-9a9a-8f38273e9328" containerID="40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139" exitCode=0
Nov 24 22:07:55 crc kubenswrapper[4812]: I1124 22:07:55.942536 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bwd" event={"ID":"02b74576-7600-4bad-9a9a-8f38273e9328","Type":"ContainerDied","Data":"40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139"}
Nov 24 22:07:56 crc kubenswrapper[4812]: I1124 22:07:56.953872 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bwd" event={"ID":"02b74576-7600-4bad-9a9a-8f38273e9328","Type":"ContainerStarted","Data":"8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f"}
Nov 24 22:07:56 crc kubenswrapper[4812]: I1124 22:07:56.974754 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r2bwd" podStartSLOduration=2.543702503 podStartE2EDuration="4.974722027s" podCreationTimestamp="2025-11-24 22:07:52 +0000 UTC" firstStartedPulling="2025-11-24 22:07:53.921806199 +0000 UTC m=+10267.710758570" lastFinishedPulling="2025-11-24 22:07:56.352825723 +0000 UTC m=+10270.141778094" observedRunningTime="2025-11-24 22:07:56.970881998 +0000 UTC m=+10270.759834379" watchObservedRunningTime="2025-11-24 22:07:56.974722027 +0000 UTC m=+10270.763674438"
Nov 24 22:08:01 crc kubenswrapper[4812]: I1124 22:08:01.069622 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qr7zk_0d003f73-5e4b-475f-bb57-66f13916a0c5/control-plane-machine-set-operator/0.log"
Nov 24 22:08:01 crc kubenswrapper[4812]: I1124 22:08:01.203814 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6nn48_c35074ed-4b34-451a-aea5-9022f6f8c685/kube-rbac-proxy/0.log"
Nov 24 22:08:01 crc kubenswrapper[4812]: I1124 22:08:01.253995 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-6nn48_c35074ed-4b34-451a-aea5-9022f6f8c685/machine-api-operator/0.log"
Nov 24 22:08:02 crc kubenswrapper[4812]: I1124 22:08:02.496110 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:08:02 crc kubenswrapper[4812]: I1124 22:08:02.496427 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:08:02 crc kubenswrapper[4812]: I1124 22:08:02.565130 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:08:03 crc kubenswrapper[4812]: I1124 22:08:02.998633 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:08:03 crc kubenswrapper[4812]: I1124 22:08:02.998976 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:08:03 crc kubenswrapper[4812]: I1124 22:08:03.070360 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:08:03 crc kubenswrapper[4812]: I1124 22:08:03.126116 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bwd"]
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.045886 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r2bwd" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="registry-server" containerID="cri-o://8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f" gracePeriod=2
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.662634 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.802535 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-catalog-content\") pod \"02b74576-7600-4bad-9a9a-8f38273e9328\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") "
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.802703 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdlcj\" (UniqueName: \"kubernetes.io/projected/02b74576-7600-4bad-9a9a-8f38273e9328-kube-api-access-bdlcj\") pod \"02b74576-7600-4bad-9a9a-8f38273e9328\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") "
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.802870 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-utilities\") pod \"02b74576-7600-4bad-9a9a-8f38273e9328\" (UID: \"02b74576-7600-4bad-9a9a-8f38273e9328\") "
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.810488 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-utilities" (OuterVolumeSpecName: "utilities") pod "02b74576-7600-4bad-9a9a-8f38273e9328" (UID: "02b74576-7600-4bad-9a9a-8f38273e9328"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.815640 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b74576-7600-4bad-9a9a-8f38273e9328-kube-api-access-bdlcj" (OuterVolumeSpecName: "kube-api-access-bdlcj") pod "02b74576-7600-4bad-9a9a-8f38273e9328" (UID: "02b74576-7600-4bad-9a9a-8f38273e9328"). InnerVolumeSpecName "kube-api-access-bdlcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.821415 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02b74576-7600-4bad-9a9a-8f38273e9328" (UID: "02b74576-7600-4bad-9a9a-8f38273e9328"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.904920 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.904956 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02b74576-7600-4bad-9a9a-8f38273e9328-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 22:08:05 crc kubenswrapper[4812]: I1124 22:08:05.904968 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdlcj\" (UniqueName: \"kubernetes.io/projected/02b74576-7600-4bad-9a9a-8f38273e9328-kube-api-access-bdlcj\") on node \"crc\" DevicePath \"\""
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.058976 4812 generic.go:334] "Generic (PLEG): container finished" podID="02b74576-7600-4bad-9a9a-8f38273e9328" containerID="8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f" exitCode=0
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.059015 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bwd" event={"ID":"02b74576-7600-4bad-9a9a-8f38273e9328","Type":"ContainerDied","Data":"8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f"}
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.059244 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r2bwd" event={"ID":"02b74576-7600-4bad-9a9a-8f38273e9328","Type":"ContainerDied","Data":"2b7f018be9dcbc7865e92beb418896078cf00a3cd98ef8e4472887bfde6268bb"}
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.059264 4812 scope.go:117] "RemoveContainer" containerID="8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.059047 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r2bwd"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.096289 4812 scope.go:117] "RemoveContainer" containerID="40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.100254 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bwd"]
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.110925 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r2bwd"]
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.138740 4812 scope.go:117] "RemoveContainer" containerID="901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.179994 4812 scope.go:117] "RemoveContainer" containerID="8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f"
Nov 24 22:08:06 crc kubenswrapper[4812]: E1124 22:08:06.180611 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f\": container with ID starting with 8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f not found: ID does not exist" containerID="8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.180642 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f"} err="failed to get container status \"8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f\": rpc error: code = NotFound desc = could not find container \"8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f\": container with ID starting with 8d64e0c1fcddc388307be45f87cae458f2893fb0e0261418637b86aaa18abd7f not found: ID does not exist"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.180668 4812 scope.go:117] "RemoveContainer" containerID="40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139"
Nov 24 22:08:06 crc kubenswrapper[4812]: E1124 22:08:06.181137 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139\": container with ID starting with 40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139 not found: ID does not exist" containerID="40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.181168 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139"} err="failed to get container status \"40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139\": rpc error: code = NotFound desc = could not find container \"40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139\": container with ID starting with 40a21cb34f3cf95d2755668124e3c42afe4eb87ba013ff16094eef8621ab7139 not found: ID does not exist"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.181186 4812 scope.go:117] "RemoveContainer" containerID="901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744"
Nov 24 22:08:06 crc kubenswrapper[4812]: E1124 22:08:06.181649 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744\": container with ID starting with 901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744 not found: ID does not exist" containerID="901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.181737 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744"} err="failed to get container status \"901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744\": rpc error: code = NotFound desc = could not find container \"901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744\": container with ID starting with 901a35cf1efc4772fb7d55761bd59152cf63cb3032ff1c82b7b8108d6e069744 not found: ID does not exist"
Nov 24 22:08:06 crc kubenswrapper[4812]: I1124 22:08:06.986921 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" path="/var/lib/kubelet/pods/02b74576-7600-4bad-9a9a-8f38273e9328/volumes"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.217691 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nd2xr"]
Nov 24 22:08:08 crc kubenswrapper[4812]: E1124 22:08:08.218649 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="registry-server"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.218667 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="registry-server"
Nov 24 22:08:08 crc kubenswrapper[4812]: E1124 22:08:08.218692 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="extract-utilities"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.218702 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="extract-utilities"
Nov 24 22:08:08 crc kubenswrapper[4812]: E1124 22:08:08.218751 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="extract-content"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.218759 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="extract-content"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.219043 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b74576-7600-4bad-9a9a-8f38273e9328" containerName="registry-server"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.221305 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.230853 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nd2xr"]
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.267891 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-catalog-content\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.268255 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hvqf\" (UniqueName: \"kubernetes.io/projected/527c9e69-7fd7-45f7-81b0-4a48148cf88d-kube-api-access-9hvqf\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.268630 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-utilities\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.370914 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-utilities\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.371006 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-catalog-content\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.371040 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hvqf\" (UniqueName: \"kubernetes.io/projected/527c9e69-7fd7-45f7-81b0-4a48148cf88d-kube-api-access-9hvqf\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.371742 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-utilities\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.371752 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-catalog-content\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.395531 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hvqf\" (UniqueName: \"kubernetes.io/projected/527c9e69-7fd7-45f7-81b0-4a48148cf88d-kube-api-access-9hvqf\") pod \"redhat-operators-nd2xr\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") " pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:08 crc kubenswrapper[4812]: I1124 22:08:08.543348 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:09 crc kubenswrapper[4812]: I1124 22:08:09.024140 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nd2xr"]
Nov 24 22:08:09 crc kubenswrapper[4812]: I1124 22:08:09.097699 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd2xr" event={"ID":"527c9e69-7fd7-45f7-81b0-4a48148cf88d","Type":"ContainerStarted","Data":"4221cf5de2a5b50d1c4eca8f7988a7a49b2f79c4c2c7d4eb3ac52d426b8316a4"}
Nov 24 22:08:10 crc kubenswrapper[4812]: I1124 22:08:10.112061 4812 generic.go:334] "Generic (PLEG): container finished" podID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerID="727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27" exitCode=0
Nov 24 22:08:10 crc kubenswrapper[4812]: I1124 22:08:10.112183 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd2xr" event={"ID":"527c9e69-7fd7-45f7-81b0-4a48148cf88d","Type":"ContainerDied","Data":"727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27"}
Nov 24 22:08:11 crc kubenswrapper[4812]: I1124 22:08:11.126172 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd2xr" event={"ID":"527c9e69-7fd7-45f7-81b0-4a48148cf88d","Type":"ContainerStarted","Data":"45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523"}
Nov 24 22:08:16 crc kubenswrapper[4812]: I1124 22:08:16.175309 4812 generic.go:334] "Generic (PLEG): container finished" podID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerID="45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523" exitCode=0
Nov 24 22:08:16 crc kubenswrapper[4812]: I1124 22:08:16.175387 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd2xr" event={"ID":"527c9e69-7fd7-45f7-81b0-4a48148cf88d","Type":"ContainerDied","Data":"45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523"}
Nov 24 22:08:16 crc kubenswrapper[4812]: I1124 22:08:16.356072 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-d2zlq_8637886d-1ab8-45c9-9b90-7d94a6820292/cert-manager-controller/0.log"
Nov 24 22:08:16 crc kubenswrapper[4812]: I1124 22:08:16.464167 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-qqxzg_999f6f49-032a-4641-9fc4-b0d7b9094d87/cert-manager-cainjector/0.log"
Nov 24 22:08:16 crc kubenswrapper[4812]: I1124 22:08:16.545502 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-kjhvq_47ba0344-b57c-47c2-917f-52f94e072562/cert-manager-webhook/0.log"
Nov 24 22:08:18 crc kubenswrapper[4812]: I1124 22:08:18.200330 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd2xr" event={"ID":"527c9e69-7fd7-45f7-81b0-4a48148cf88d","Type":"ContainerStarted","Data":"58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce"}
Nov 24 22:08:18 crc kubenswrapper[4812]: I1124 22:08:18.220978 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nd2xr" podStartSLOduration=3.766783905 podStartE2EDuration="10.220962974s" podCreationTimestamp="2025-11-24 22:08:08 +0000 UTC" firstStartedPulling="2025-11-24 22:08:10.114710492 +0000 UTC m=+10283.903662873" lastFinishedPulling="2025-11-24 22:08:16.568889561 +0000 UTC m=+10290.357841942" observedRunningTime="2025-11-24 22:08:18.218927526 +0000 UTC m=+10292.007879907" watchObservedRunningTime="2025-11-24 22:08:18.220962974 +0000 UTC m=+10292.009915345"
Nov 24 22:08:18 crc kubenswrapper[4812]: I1124 22:08:18.544837 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:18 crc kubenswrapper[4812]: I1124 22:08:18.544914 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:19 crc kubenswrapper[4812]: I1124 22:08:19.587082 4812 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nd2xr" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="registry-server" probeResult="failure" output=<
Nov 24 22:08:19 crc kubenswrapper[4812]: timeout: failed to connect service ":50051" within 1s
Nov 24 22:08:19 crc kubenswrapper[4812]: >
Nov 24 22:08:28 crc kubenswrapper[4812]: I1124 22:08:28.631264 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:28 crc kubenswrapper[4812]: I1124 22:08:28.688219 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:28 crc kubenswrapper[4812]: I1124 22:08:28.877975 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nd2xr"]
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.344659 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nd2xr" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="registry-server" containerID="cri-o://58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce" gracePeriod=2
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.778388 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.870271 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hvqf\" (UniqueName: \"kubernetes.io/projected/527c9e69-7fd7-45f7-81b0-4a48148cf88d-kube-api-access-9hvqf\") pod \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") "
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.870316 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-catalog-content\") pod \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") "
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.870368 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-utilities\") pod \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\" (UID: \"527c9e69-7fd7-45f7-81b0-4a48148cf88d\") "
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.871531 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-utilities" (OuterVolumeSpecName: "utilities") pod "527c9e69-7fd7-45f7-81b0-4a48148cf88d" (UID: "527c9e69-7fd7-45f7-81b0-4a48148cf88d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.877452 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/527c9e69-7fd7-45f7-81b0-4a48148cf88d-kube-api-access-9hvqf" (OuterVolumeSpecName: "kube-api-access-9hvqf") pod "527c9e69-7fd7-45f7-81b0-4a48148cf88d" (UID: "527c9e69-7fd7-45f7-81b0-4a48148cf88d"). InnerVolumeSpecName "kube-api-access-9hvqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.972803 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hvqf\" (UniqueName: \"kubernetes.io/projected/527c9e69-7fd7-45f7-81b0-4a48148cf88d-kube-api-access-9hvqf\") on node \"crc\" DevicePath \"\""
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.972835 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 22:08:30 crc kubenswrapper[4812]: I1124 22:08:30.987768 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "527c9e69-7fd7-45f7-81b0-4a48148cf88d" (UID: "527c9e69-7fd7-45f7-81b0-4a48148cf88d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.074915 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/527c9e69-7fd7-45f7-81b0-4a48148cf88d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.357073 4812 generic.go:334] "Generic (PLEG): container finished" podID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerID="58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce" exitCode=0
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.357112 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd2xr" event={"ID":"527c9e69-7fd7-45f7-81b0-4a48148cf88d","Type":"ContainerDied","Data":"58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce"}
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.357138 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nd2xr" event={"ID":"527c9e69-7fd7-45f7-81b0-4a48148cf88d","Type":"ContainerDied","Data":"4221cf5de2a5b50d1c4eca8f7988a7a49b2f79c4c2c7d4eb3ac52d426b8316a4"}
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.357156 4812 scope.go:117] "RemoveContainer" containerID="58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce"
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.357277 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nd2xr"
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.380871 4812 scope.go:117] "RemoveContainer" containerID="45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523"
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.396633 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nd2xr"]
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.406052 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nd2xr"]
Nov 24 22:08:31 crc kubenswrapper[4812]: I1124 22:08:31.414005 4812 scope.go:117] "RemoveContainer" containerID="727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.179230 4812 scope.go:117] "RemoveContainer" containerID="58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce"
Nov 24 22:08:32 crc kubenswrapper[4812]: E1124 22:08:32.181662 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce\": container with ID starting with 58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce not found: ID does not exist" containerID="58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.181718 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce"} err="failed to get container status \"58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce\": rpc error: code = NotFound desc = could not find container \"58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce\": container with ID starting with 58528dc62501ab1308573eee22282e5838529b60b566ca617cca2d9a0e7036ce not found: ID does not exist"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.181753 4812 scope.go:117] "RemoveContainer" containerID="45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523"
Nov 24 22:08:32 crc kubenswrapper[4812]: E1124 22:08:32.182127 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523\": container with ID starting with 45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523 not found: ID does not exist" containerID="45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.182166 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523"} err="failed to get container status \"45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523\": rpc error: code = NotFound desc = could not find container \"45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523\": container with ID starting with 45ec9afa443daab5ac059ce9de3977a7398a873df3f127e3a8bbda585ad4f523 not found: ID does not exist"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.182192 4812 scope.go:117] "RemoveContainer" containerID="727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27"
Nov 24 22:08:32 crc kubenswrapper[4812]: E1124 22:08:32.182787 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27\": container with ID starting with 727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27 not found: ID does not exist" containerID="727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.182843 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27"} err="failed to get container status \"727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27\": rpc error: code = NotFound desc = could not find container \"727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27\": container with ID starting with 727323c6b11ab5a0421e319d146d7226b2862e34d0333e3c4467d1e8d59cad27 not found: ID does not exist"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.355500 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-h2g74_84931e32-2b26-4a7b-8387-78c99e46841d/nmstate-console-plugin/0.log"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.395796 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fp6mk_901a9498-a0c1-497c-b671-122385a07c36/nmstate-handler/0.log"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.572972 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-dt784_b313856b-afca-4673-ba78-cdcbf5d465cb/kube-rbac-proxy/0.log"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.603870 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-dt784_b313856b-afca-4673-ba78-cdcbf5d465cb/nmstate-metrics/0.log"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.835573 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-ttqsd_e0231310-bf09-4709-a225-ffc202a7ca6a/nmstate-operator/0.log"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.856526 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-5ts74_dd98c63c-2347-487d-95d0-b987f616398c/nmstate-webhook/0.log"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.977429 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" path="/var/lib/kubelet/pods/527c9e69-7fd7-45f7-81b0-4a48148cf88d/volumes"
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.998145 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:08:32 crc kubenswrapper[4812]: I1124 22:08:32.998217 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:08:48 crc kubenswrapper[4812]: I1124 22:08:48.709108 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fnrdw_50657362-f6ce-4ba4-8a38-9d512e1abb86/kube-rbac-proxy/0.log"
Nov 24 22:08:48 crc kubenswrapper[4812]: I1124 22:08:48.889207 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-frr-files/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.073672 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fnrdw_50657362-f6ce-4ba4-8a38-9d512e1abb86/controller/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.080403 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-frr-files/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.087093 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-reloader/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.118510 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-metrics/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.225082 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-reloader/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.402116 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-reloader/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.424822 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-metrics/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.430838 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-metrics/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.466731 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-frr-files/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.599945 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-frr-files/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.604097 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-reloader/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.621108 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/cp-metrics/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.664804 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/controller/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.755805 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/frr-metrics/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.819743 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/kube-rbac-proxy/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.861087 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/kube-rbac-proxy-frr/0.log"
Nov 24 22:08:49 crc kubenswrapper[4812]: I1124 22:08:49.946588 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/reloader/0.log"
Nov 24 22:08:50 crc kubenswrapper[4812]: I1124 22:08:50.077598 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-sp8mm_7b2ec99a-e925-44c3-b8ca-f60f2bc7c015/frr-k8s-webhook-server/0.log"
Nov 24 22:08:50 crc kubenswrapper[4812]: I1124 22:08:50.221088 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-78f47c68d-wvmj4_3d0c4f09-ce13-4072-8746-9b086bca839f/manager/0.log"
Nov 24 22:08:50 crc kubenswrapper[4812]: I1124 22:08:50.376484 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-84b697fbfb-749d6_66596d46-48a0-4690-935e-e0144ee6923c/webhook-server/0.log"
Nov 24 22:08:50 crc kubenswrapper[4812]: I1124 22:08:50.509428 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wx4bd_b2a8b1dc-073e-4abb-bb26-f6bce6e2344f/kube-rbac-proxy/0.log"
Nov 24 22:08:51 crc kubenswrapper[4812]: I1124 22:08:51.429315 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-wx4bd_b2a8b1dc-073e-4abb-bb26-f6bce6e2344f/speaker/0.log"
Nov 24 22:08:53 crc kubenswrapper[4812]: I1124 22:08:53.289239 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pnj5c_492fe1b2-51da-4c19-a7d9-0aa2727a9989/frr/0.log"
Nov 24 22:09:02 crc kubenswrapper[4812]: I1124 22:09:02.998927 4812 patch_prober.go:28] interesting pod/machine-config-daemon-nscsk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.000481 4812 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.000625 4812 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-nscsk"
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.001601 4812 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e"} pod="openshift-machine-config-operator/machine-config-daemon-nscsk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.001749 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerName="machine-config-daemon" containerID="cri-o://29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" gracePeriod=600
Nov 24 22:09:03 crc kubenswrapper[4812]: E1124 22:09:03.130229 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.749024 4812 generic.go:334] "Generic (PLEG): container finished" podID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" exitCode=0
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.749124 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerDied","Data":"29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e"}
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.749488 4812 scope.go:117] "RemoveContainer" containerID="49054f13e47f83e6b6e31e0eccf51f5fd1d8d5b8e02975ff3d97240cc7947326"
Nov 24 22:09:03 crc kubenswrapper[4812]: I1124 22:09:03.750156 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e"
Nov 24 22:09:03 crc kubenswrapper[4812]: E1124 22:09:03.750526 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:09:04 crc kubenswrapper[4812]: I1124 22:09:04.702178 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb_6364088b-2467-4d1d-b6e9-b29ec6ea75a5/util/0.log"
Nov 24 22:09:04 crc kubenswrapper[4812]: I1124 22:09:04.895474 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb_6364088b-2467-4d1d-b6e9-b29ec6ea75a5/pull/0.log"
Nov 24 22:09:04 crc kubenswrapper[4812]: I1124 22:09:04.932666 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb_6364088b-2467-4d1d-b6e9-b29ec6ea75a5/util/0.log"
Nov 24 22:09:04 crc kubenswrapper[4812]: I1124 22:09:04.949383 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb_6364088b-2467-4d1d-b6e9-b29ec6ea75a5/pull/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.126329 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb_6364088b-2467-4d1d-b6e9-b29ec6ea75a5/util/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.126653 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb_6364088b-2467-4d1d-b6e9-b29ec6ea75a5/extract/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.143074 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5qpzb_6364088b-2467-4d1d-b6e9-b29ec6ea75a5/pull/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.302458 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9_ce87880b-2429-4769-8f74-00539adf6377/util/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.457383 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9_ce87880b-2429-4769-8f74-00539adf6377/util/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.484727 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9_ce87880b-2429-4769-8f74-00539adf6377/pull/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.486594 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9_ce87880b-2429-4769-8f74-00539adf6377/pull/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.649402 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9_ce87880b-2429-4769-8f74-00539adf6377/util/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.671237 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9_ce87880b-2429-4769-8f74-00539adf6377/pull/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.703459 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ernsl9_ce87880b-2429-4769-8f74-00539adf6377/extract/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.821691 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv_c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe/util/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.977036 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv_c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe/pull/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.982060 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv_c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe/pull/0.log"
Nov 24 22:09:05 crc kubenswrapper[4812]: I1124 22:09:05.992438 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv_c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe/util/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.204022 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv_c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe/pull/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.207947 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv_c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe/util/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.223631 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102vdvv_c4ee6d91-39d3-42a3-8a0f-a05c994e8bbe/extract/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.341153 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvt8f_00e9d0b7-e26a-493c-a555-e03a8faaa937/extract-utilities/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.529953 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvt8f_00e9d0b7-e26a-493c-a555-e03a8faaa937/extract-utilities/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.537712 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvt8f_00e9d0b7-e26a-493c-a555-e03a8faaa937/extract-content/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.575268 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvt8f_00e9d0b7-e26a-493c-a555-e03a8faaa937/extract-content/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.691167 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvt8f_00e9d0b7-e26a-493c-a555-e03a8faaa937/extract-utilities/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.707645 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvt8f_00e9d0b7-e26a-493c-a555-e03a8faaa937/extract-content/0.log"
Nov 24 22:09:06 crc kubenswrapper[4812]: I1124 22:09:06.894903 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6rkng_338b7e7c-a0ec-4709-87d2-27b5d5852f3c/extract-utilities/0.log"
Nov 24 22:09:07 crc kubenswrapper[4812]: I1124 22:09:07.087732 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6rkng_338b7e7c-a0ec-4709-87d2-27b5d5852f3c/extract-utilities/0.log"
Nov 24 22:09:07 crc kubenswrapper[4812]: I1124 22:09:07.471646 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rvt8f_00e9d0b7-e26a-493c-a555-e03a8faaa937/registry-server/0.log"
Nov 24 22:09:07 crc kubenswrapper[4812]: I1124 22:09:07.816988 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6rkng_338b7e7c-a0ec-4709-87d2-27b5d5852f3c/extract-content/0.log"
Nov 24 22:09:07 crc kubenswrapper[4812]: I1124 22:09:07.850622 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6rkng_338b7e7c-a0ec-4709-87d2-27b5d5852f3c/extract-content/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.204008 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6rkng_338b7e7c-a0ec-4709-87d2-27b5d5852f3c/extract-content/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.223477 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd_0e652404-fed6-42f2-aae4-02b572479de1/util/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.230366 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6rkng_338b7e7c-a0ec-4709-87d2-27b5d5852f3c/extract-utilities/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.442225 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd_0e652404-fed6-42f2-aae4-02b572479de1/pull/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.467458 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd_0e652404-fed6-42f2-aae4-02b572479de1/pull/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.475039 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd_0e652404-fed6-42f2-aae4-02b572479de1/util/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.634965 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd_0e652404-fed6-42f2-aae4-02b572479de1/util/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.727274 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd_0e652404-fed6-42f2-aae4-02b572479de1/pull/0.log"
Nov 24 22:09:08 crc kubenswrapper[4812]: I1124 22:09:08.738770 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6xb2nd_0e652404-fed6-42f2-aae4-02b572479de1/extract/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.447379 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdg8s_de8b682e-4003-4de2-b2b2-68a61dbb5835/extract-utilities/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.496976 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-xbqxn_d8906e19-a1ac-49ea-b9c0-d4470aa2b806/marketplace-operator/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.612686 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdg8s_de8b682e-4003-4de2-b2b2-68a61dbb5835/extract-utilities/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.645025 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdg8s_de8b682e-4003-4de2-b2b2-68a61dbb5835/extract-content/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.689789 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6rkng_338b7e7c-a0ec-4709-87d2-27b5d5852f3c/registry-server/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.705832 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdg8s_de8b682e-4003-4de2-b2b2-68a61dbb5835/extract-content/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.882065 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdg8s_de8b682e-4003-4de2-b2b2-68a61dbb5835/extract-utilities/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.887639 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdg8s_de8b682e-4003-4de2-b2b2-68a61dbb5835/extract-content/0.log"
Nov 24 22:09:09 crc kubenswrapper[4812]: I1124 22:09:09.909960 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kmt6_01b2c16e-f16e-4b31-abf4-ba0a69e849d6/extract-utilities/0.log"
Nov 24 22:09:10 crc kubenswrapper[4812]: I1124 22:09:10.131412 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kmt6_01b2c16e-f16e-4b31-abf4-ba0a69e849d6/extract-content/0.log"
Nov 24 22:09:10 crc kubenswrapper[4812]: I1124 22:09:10.135593 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kmt6_01b2c16e-f16e-4b31-abf4-ba0a69e849d6/extract-utilities/0.log"
Nov 24 22:09:10 crc kubenswrapper[4812]: I1124 22:09:10.198227 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdg8s_de8b682e-4003-4de2-b2b2-68a61dbb5835/registry-server/0.log"
Nov 24 22:09:10 crc kubenswrapper[4812]: I1124 22:09:10.214890 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kmt6_01b2c16e-f16e-4b31-abf4-ba0a69e849d6/extract-content/0.log"
Nov 24 22:09:10 crc kubenswrapper[4812]: I1124 22:09:10.321175 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kmt6_01b2c16e-f16e-4b31-abf4-ba0a69e849d6/extract-utilities/0.log"
Nov 24 22:09:10 crc kubenswrapper[4812]: I1124 22:09:10.341601 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kmt6_01b2c16e-f16e-4b31-abf4-ba0a69e849d6/extract-content/0.log"
Nov 24 22:09:11 crc kubenswrapper[4812]: I1124 22:09:11.488611 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8kmt6_01b2c16e-f16e-4b31-abf4-ba0a69e849d6/registry-server/0.log"
Nov 24 22:09:17 crc kubenswrapper[4812]: I1124 22:09:17.965779 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e"
Nov 24 22:09:17 crc kubenswrapper[4812]: E1124 22:09:17.967275 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:09:24 crc kubenswrapper[4812]: I1124 22:09:24.129715 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-djtpv_2dfb256e-0c0f-4208-84f8-389921336bf1/prometheus-operator/0.log"
Nov 24 22:09:24 crc kubenswrapper[4812]: I1124 22:09:24.345606 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c965c4cd8-sf5b8_67d260a7-9265-4dc5-ad9a-0d52330ffab9/prometheus-operator-admission-webhook/0.log"
Nov 24 22:09:24 crc kubenswrapper[4812]: I1124 22:09:24.354993 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c965c4cd8-7lm9l_b8adfdfd-e5dd-4213-8cb5-56e8d2cf1b4e/prometheus-operator-admission-webhook/0.log"
Nov 24 22:09:24 crc kubenswrapper[4812]: I1124 22:09:24.532061 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-wk9fv_90781d50-55b2-4978-85f5-c107971271e1/operator/0.log"
Nov 24 22:09:24 crc kubenswrapper[4812]: I1124 22:09:24.642048 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-b78tl_429f0e17-50aa-44ad-917b-2c5f8b25b182/perses-operator/0.log"
Nov 24 22:09:29 crc kubenswrapper[4812]: I1124 22:09:29.966412 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e"
Nov 24 22:09:29 crc kubenswrapper[4812]: E1124 22:09:29.967491 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:09:44 crc kubenswrapper[4812]: I1124 22:09:44.966382 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e"
Nov 24 22:09:44 crc kubenswrapper[4812]: E1124 22:09:44.967386 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5"
Nov 24 22:09:55 crc kubenswrapper[4812]: E1124 22:09:55.268505 4812 upgradeaware.go:427] Error proxying data
from client to backend: readfrom tcp 38.102.83.36:38094->38.102.83.36:46073: write tcp 38.102.83.36:38094->38.102.83.36:46073: write: broken pipe Nov 24 22:09:57 crc kubenswrapper[4812]: I1124 22:09:57.965733 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:09:57 crc kubenswrapper[4812]: E1124 22:09:57.966379 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:10:06 crc kubenswrapper[4812]: I1124 22:10:06.497186 4812 scope.go:117] "RemoveContainer" containerID="a187e32e63c117d09fd0dcbe2b1590db8d75f040ffc1417dd677a9ec27049a03" Nov 24 22:10:09 crc kubenswrapper[4812]: I1124 22:10:09.966222 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:10:09 crc kubenswrapper[4812]: E1124 22:10:09.967594 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.522281 4812 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84dmh"] Nov 24 22:10:11 crc kubenswrapper[4812]: E1124 22:10:11.523025 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="extract-content" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.523038 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="extract-content" Nov 24 22:10:11 crc kubenswrapper[4812]: E1124 22:10:11.523048 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="registry-server" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.523053 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="registry-server" Nov 24 22:10:11 crc kubenswrapper[4812]: E1124 22:10:11.523077 4812 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="extract-utilities" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.523085 4812 state_mem.go:107] "Deleted CPUSet assignment" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="extract-utilities" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.524929 4812 memory_manager.go:354] "RemoveStaleState removing state" podUID="527c9e69-7fd7-45f7-81b0-4a48148cf88d" containerName="registry-server" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.527046 4812 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.540232 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84dmh"] Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.711636 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-catalog-content\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.711706 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4kvd\" (UniqueName: \"kubernetes.io/projected/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-kube-api-access-z4kvd\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.711836 4812 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-utilities\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.813980 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-catalog-content\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.814050 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4kvd\" (UniqueName: \"kubernetes.io/projected/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-kube-api-access-z4kvd\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.814134 4812 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-utilities\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.814584 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-catalog-content\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.814723 4812 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-utilities\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.833973 4812 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z4kvd\" (UniqueName: \"kubernetes.io/projected/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-kube-api-access-z4kvd\") pod \"certified-operators-84dmh\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:11 crc kubenswrapper[4812]: I1124 22:10:11.876666 4812 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:12 crc kubenswrapper[4812]: I1124 22:10:12.467782 4812 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84dmh"] Nov 24 22:10:12 crc kubenswrapper[4812]: I1124 22:10:12.580698 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84dmh" event={"ID":"a40ab24b-5359-4420-a7d0-e6503fc6f3a6","Type":"ContainerStarted","Data":"e82238fee472232e7a71bb68ee1a5cdd424e8f73229157417fa6a125f1ce028a"} Nov 24 22:10:13 crc kubenswrapper[4812]: I1124 22:10:13.595232 4812 generic.go:334] "Generic (PLEG): container finished" podID="a40ab24b-5359-4420-a7d0-e6503fc6f3a6" containerID="6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26" exitCode=0 Nov 24 22:10:13 crc kubenswrapper[4812]: I1124 22:10:13.595363 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84dmh" event={"ID":"a40ab24b-5359-4420-a7d0-e6503fc6f3a6","Type":"ContainerDied","Data":"6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26"} Nov 24 22:10:14 crc kubenswrapper[4812]: I1124 22:10:14.612321 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84dmh" event={"ID":"a40ab24b-5359-4420-a7d0-e6503fc6f3a6","Type":"ContainerStarted","Data":"6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01"} Nov 24 22:10:15 crc kubenswrapper[4812]: I1124 22:10:15.622805 4812 generic.go:334] "Generic (PLEG): container finished" podID="a40ab24b-5359-4420-a7d0-e6503fc6f3a6" containerID="6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01" exitCode=0 Nov 24 22:10:15 crc kubenswrapper[4812]: I1124 22:10:15.622865 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84dmh" event={"ID":"a40ab24b-5359-4420-a7d0-e6503fc6f3a6","Type":"ContainerDied","Data":"6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01"} Nov 24 22:10:16 crc kubenswrapper[4812]: I1124 22:10:16.636630 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84dmh" event={"ID":"a40ab24b-5359-4420-a7d0-e6503fc6f3a6","Type":"ContainerStarted","Data":"0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546"} Nov 24 22:10:16 crc kubenswrapper[4812]: I1124 22:10:16.665417 4812 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84dmh" podStartSLOduration=3.257478021 podStartE2EDuration="5.665391053s" podCreationTimestamp="2025-11-24 22:10:11 +0000 UTC" firstStartedPulling="2025-11-24 22:10:13.601656049 +0000 UTC m=+10407.390608420" lastFinishedPulling="2025-11-24 22:10:16.009569051 +0000 UTC m=+10409.798521452" observedRunningTime="2025-11-24 22:10:16.658459207 +0000 UTC m=+10410.447411578" watchObservedRunningTime="2025-11-24 22:10:16.665391053 +0000 UTC m=+10410.454343434" Nov 24 22:10:20 crc kubenswrapper[4812]: I1124 22:10:20.966618 4812 scope.go:117] "RemoveContainer" 
containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:10:20 crc kubenswrapper[4812]: E1124 22:10:20.967763 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:10:21 crc kubenswrapper[4812]: I1124 22:10:21.877646 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:21 crc kubenswrapper[4812]: I1124 22:10:21.879798 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:21 crc kubenswrapper[4812]: I1124 22:10:21.969822 4812 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:22 crc kubenswrapper[4812]: I1124 22:10:22.793573 4812 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:22 crc kubenswrapper[4812]: I1124 22:10:22.856074 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84dmh"] Nov 24 22:10:24 crc kubenswrapper[4812]: I1124 22:10:24.734387 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84dmh" podUID="a40ab24b-5359-4420-a7d0-e6503fc6f3a6" containerName="registry-server" containerID="cri-o://0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546" gracePeriod=2 Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.281229 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.369483 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4kvd\" (UniqueName: \"kubernetes.io/projected/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-kube-api-access-z4kvd\") pod \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.369636 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-catalog-content\") pod \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.369700 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-utilities\") pod \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\" (UID: \"a40ab24b-5359-4420-a7d0-e6503fc6f3a6\") " Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.370405 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-utilities" (OuterVolumeSpecName: "utilities") pod "a40ab24b-5359-4420-a7d0-e6503fc6f3a6" (UID: "a40ab24b-5359-4420-a7d0-e6503fc6f3a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.374422 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-kube-api-access-z4kvd" (OuterVolumeSpecName: "kube-api-access-z4kvd") pod "a40ab24b-5359-4420-a7d0-e6503fc6f3a6" (UID: "a40ab24b-5359-4420-a7d0-e6503fc6f3a6"). InnerVolumeSpecName "kube-api-access-z4kvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.471820 4812 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.471854 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4kvd\" (UniqueName: \"kubernetes.io/projected/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-kube-api-access-z4kvd\") on node \"crc\" DevicePath \"\"" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.748988 4812 generic.go:334] "Generic (PLEG): container finished" podID="a40ab24b-5359-4420-a7d0-e6503fc6f3a6" containerID="0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546" exitCode=0 Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.749041 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84dmh" event={"ID":"a40ab24b-5359-4420-a7d0-e6503fc6f3a6","Type":"ContainerDied","Data":"0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546"} Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.749055 4812 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84dmh" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.749072 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84dmh" event={"ID":"a40ab24b-5359-4420-a7d0-e6503fc6f3a6","Type":"ContainerDied","Data":"e82238fee472232e7a71bb68ee1a5cdd424e8f73229157417fa6a125f1ce028a"} Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.749095 4812 scope.go:117] "RemoveContainer" containerID="0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.793856 4812 scope.go:117] "RemoveContainer" containerID="6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.830820 4812 scope.go:117] "RemoveContainer" containerID="6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.912254 4812 scope.go:117] "RemoveContainer" containerID="0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546" Nov 24 22:10:25 crc kubenswrapper[4812]: E1124 22:10:25.913090 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546\": container with ID starting with 0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546 not found: ID does not exist" containerID="0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.913226 4812 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546"} err="failed to get container status \"0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546\": rpc error: code = NotFound desc = could not find container \"0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546\": container with ID starting with 0ef8f18a5c0b4ee453644ca885d70ebd3c1db8a09792d2729f87c7ac807c7546 not found: ID does not exist" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.913361 4812 scope.go:117] "RemoveContainer" containerID="6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01" Nov 24 22:10:25 crc kubenswrapper[4812]: E1124 22:10:25.914465 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01\": container with ID starting with 6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01 not found: ID does not exist" containerID="6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.914483 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01"} err="failed to get container status \"6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01\": rpc error: code = NotFound desc = could not find container \"6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01\": container with ID starting with 6d4ec4df968d10f4d2799abab1293d256cbf052bfd0d40e24364badefa42aa01 not found: ID does not exist" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.914496 4812 scope.go:117] "RemoveContainer" containerID="6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26" Nov 24 22:10:25 crc kubenswrapper[4812]: E1124 22:10:25.914763 4812 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26\": container with ID starting with 6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26 not found: ID does not exist" containerID="6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26" Nov 24 22:10:25 crc kubenswrapper[4812]: I1124 22:10:25.914974 4812 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26"} err="failed to get container status \"6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26\": rpc error: code = NotFound desc = could not find container \"6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26\": container with ID starting with 6d8e299fe2e5f58511bd1ae4747beb4b72c3f0efca7b172105b6c67c6db5ce26 not found: ID does not exist" Nov 24 22:10:26 crc kubenswrapper[4812]: I1124 22:10:26.361516 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a40ab24b-5359-4420-a7d0-e6503fc6f3a6" (UID: "a40ab24b-5359-4420-a7d0-e6503fc6f3a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:10:26 crc kubenswrapper[4812]: I1124 22:10:26.408154 4812 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a40ab24b-5359-4420-a7d0-e6503fc6f3a6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:10:26 crc kubenswrapper[4812]: I1124 22:10:26.683319 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84dmh"] Nov 24 22:10:26 crc kubenswrapper[4812]: I1124 22:10:26.692851 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84dmh"] Nov 24 22:10:26 crc kubenswrapper[4812]: I1124 22:10:26.996082 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a40ab24b-5359-4420-a7d0-e6503fc6f3a6" path="/var/lib/kubelet/pods/a40ab24b-5359-4420-a7d0-e6503fc6f3a6/volumes" Nov 24 22:10:34 crc kubenswrapper[4812]: I1124 22:10:34.965869 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:10:34 crc kubenswrapper[4812]: E1124 22:10:34.966996 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:10:47 crc kubenswrapper[4812]: I1124 22:10:47.967079 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:10:47 crc kubenswrapper[4812]: E1124 22:10:47.968056 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:11:01 crc kubenswrapper[4812]: I1124 22:11:01.968429 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:11:01 crc kubenswrapper[4812]: E1124 22:11:01.969222 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:11:12 crc kubenswrapper[4812]: I1124 22:11:12.970258 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:11:12 crc kubenswrapper[4812]: E1124 22:11:12.971451 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:11:27 crc kubenswrapper[4812]: I1124 22:11:27.969068 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:11:27 crc kubenswrapper[4812]: E1124 22:11:27.970556 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:11:30 crc kubenswrapper[4812]: I1124 22:11:30.921841 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5889655-07e1-486a-9f90-deb9ec6781b9" containerID="f15ab36df18f4c44e925963216ab0cd9288b54d4807ee64374c809b407fef859" exitCode=0 Nov 24 22:11:30 crc kubenswrapper[4812]: I1124 22:11:30.922495 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4zm6x/must-gather-2fczk" event={"ID":"c5889655-07e1-486a-9f90-deb9ec6781b9","Type":"ContainerDied","Data":"f15ab36df18f4c44e925963216ab0cd9288b54d4807ee64374c809b407fef859"} Nov 24 22:11:30 crc kubenswrapper[4812]: I1124 22:11:30.923811 4812 scope.go:117] "RemoveContainer" containerID="f15ab36df18f4c44e925963216ab0cd9288b54d4807ee64374c809b407fef859" Nov 24 22:11:31 crc kubenswrapper[4812]: I1124 22:11:31.631660 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4zm6x_must-gather-2fczk_c5889655-07e1-486a-9f90-deb9ec6781b9/gather/0.log" Nov 24 22:11:39 crc kubenswrapper[4812]: I1124 22:11:39.507183 4812 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4zm6x/must-gather-2fczk"] Nov 24 22:11:39 crc kubenswrapper[4812]: I1124 22:11:39.508234 4812 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4zm6x/must-gather-2fczk" podUID="c5889655-07e1-486a-9f90-deb9ec6781b9" containerName="copy" containerID="cri-o://8b464f8f308aa024320f40f1e2d856f89cc53b8487cb8fffa2c22a338ff4afe0" gracePeriod=2 Nov 24 22:11:39 crc kubenswrapper[4812]: I1124 22:11:39.516357 4812 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4zm6x/must-gather-2fczk"] Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.016686 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4zm6x_must-gather-2fczk_c5889655-07e1-486a-9f90-deb9ec6781b9/copy/0.log" Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.017690 4812 generic.go:334] "Generic (PLEG): container finished" podID="c5889655-07e1-486a-9f90-deb9ec6781b9" containerID="8b464f8f308aa024320f40f1e2d856f89cc53b8487cb8fffa2c22a338ff4afe0" exitCode=143 Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.639557 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4zm6x_must-gather-2fczk_c5889655-07e1-486a-9f90-deb9ec6781b9/copy/0.log" Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.640167 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.749226 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/c5889655-07e1-486a-9f90-deb9ec6781b9-kube-api-access-297cs\") pod \"c5889655-07e1-486a-9f90-deb9ec6781b9\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.749536 4812 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5889655-07e1-486a-9f90-deb9ec6781b9-must-gather-output\") pod \"c5889655-07e1-486a-9f90-deb9ec6781b9\" (UID: \"c5889655-07e1-486a-9f90-deb9ec6781b9\") " Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.758621 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5889655-07e1-486a-9f90-deb9ec6781b9-kube-api-access-297cs" (OuterVolumeSpecName: "kube-api-access-297cs") pod "c5889655-07e1-486a-9f90-deb9ec6781b9" (UID: "c5889655-07e1-486a-9f90-deb9ec6781b9"). InnerVolumeSpecName "kube-api-access-297cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.851867 4812 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297cs\" (UniqueName: \"kubernetes.io/projected/c5889655-07e1-486a-9f90-deb9ec6781b9-kube-api-access-297cs\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.972984 4812 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5889655-07e1-486a-9f90-deb9ec6781b9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "c5889655-07e1-486a-9f90-deb9ec6781b9" (UID: "c5889655-07e1-486a-9f90-deb9ec6781b9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:11:40 crc kubenswrapper[4812]: I1124 22:11:40.977018 4812 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5889655-07e1-486a-9f90-deb9ec6781b9" path="/var/lib/kubelet/pods/c5889655-07e1-486a-9f90-deb9ec6781b9/volumes" Nov 24 22:11:41 crc kubenswrapper[4812]: I1124 22:11:41.028892 4812 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4zm6x_must-gather-2fczk_c5889655-07e1-486a-9f90-deb9ec6781b9/copy/0.log" Nov 24 22:11:41 crc kubenswrapper[4812]: I1124 22:11:41.029643 4812 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4zm6x/must-gather-2fczk" Nov 24 22:11:41 crc kubenswrapper[4812]: I1124 22:11:41.029670 4812 scope.go:117] "RemoveContainer" containerID="8b464f8f308aa024320f40f1e2d856f89cc53b8487cb8fffa2c22a338ff4afe0" Nov 24 22:11:41 crc kubenswrapper[4812]: I1124 22:11:41.049030 4812 scope.go:117] "RemoveContainer" containerID="f15ab36df18f4c44e925963216ab0cd9288b54d4807ee64374c809b407fef859" Nov 24 22:11:41 crc kubenswrapper[4812]: I1124 22:11:41.056621 4812 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c5889655-07e1-486a-9f90-deb9ec6781b9-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:41 crc kubenswrapper[4812]: I1124 22:11:41.965466 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:11:41 crc kubenswrapper[4812]: E1124 22:11:41.965879 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:11:56 crc kubenswrapper[4812]: I1124 22:11:56.975269 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:11:56 crc kubenswrapper[4812]: E1124 22:11:56.976522 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:12:10 crc kubenswrapper[4812]: I1124 22:12:10.965922 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:12:10 crc kubenswrapper[4812]: E1124 22:12:10.966748 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:12:25 crc kubenswrapper[4812]: I1124 22:12:25.965537 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:12:25 crc kubenswrapper[4812]: E1124 22:12:25.966394 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:12:36 crc kubenswrapper[4812]: I1124 22:12:36.980196 4812 scope.go:117] "RemoveContainer" 
containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:12:36 crc kubenswrapper[4812]: E1124 22:12:36.983455 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:12:48 crc kubenswrapper[4812]: I1124 22:12:48.972903 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:12:48 crc kubenswrapper[4812]: E1124 22:12:48.974459 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:13:00 crc kubenswrapper[4812]: I1124 22:13:00.966999 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:13:00 crc kubenswrapper[4812]: E1124 22:13:00.968104 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:13:15 crc kubenswrapper[4812]: I1124 22:13:15.966000 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:13:15 crc kubenswrapper[4812]: E1124 22:13:15.966915 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:13:26 crc kubenswrapper[4812]: I1124 22:13:26.990037 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:13:26 crc kubenswrapper[4812]: E1124 22:13:26.990753 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:13:40 crc kubenswrapper[4812]: I1124 22:13:40.966823 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:13:40 crc kubenswrapper[4812]: E1124 22:13:40.967883 4812 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:13:52 crc kubenswrapper[4812]: I1124 22:13:52.966554 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:13:52 crc kubenswrapper[4812]: E1124 22:13:52.967704 4812 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-nscsk_openshift-machine-config-operator(bcb3ad4b-5afb-47fe-8963-9f79489d45d5)\"" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" podUID="bcb3ad4b-5afb-47fe-8963-9f79489d45d5" Nov 24 22:14:03 crc kubenswrapper[4812]: I1124 22:14:03.965893 4812 scope.go:117] "RemoveContainer" containerID="29f008bdefcee00a33e94e16cef46771932b0bbe0975652aad2abc587aa3730e" Nov 24 22:14:04 crc kubenswrapper[4812]: I1124 22:14:04.935248 4812 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-nscsk" event={"ID":"bcb3ad4b-5afb-47fe-8963-9f79489d45d5","Type":"ContainerStarted","Data":"3126e044ff16d8e80ed22bd466ba63b02cf24b5c4007a989b999d0685864e3d5"}